Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1623693198 - Will randomize all specs
Will run 5668 specs

Running in parallel across 10 nodes

Jun 14 17:53:21.142: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 17:53:21.144: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 14 17:53:21.326: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 14 17:53:21.378: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 14 17:53:21.378: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jun 14 17:53:21.378: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 14 17:53:21.393: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
Jun 14 17:53:21.393: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jun 14 17:53:21.393: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds' (0 seconds elapsed)
Jun 14 17:53:21.393: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 14 17:53:21.393: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed)
Jun 14 17:53:21.393: INFO: e2e test version: v1.20.7
Jun 14 17:53:21.395: INFO: kube-apiserver version: v1.20.7
Jun 14 17:53:21.396: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 17:53:21.402: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSS
------------------------------
Jun 14 17:53:21.399: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 17:53:21.421: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
Jun 14 17:53:21.409: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 17:53:21.429: INFO: Cluster IP family: ipv4
SSSSSSSSSSS
------------------------------
Jun 14 17:53:21.414: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 17:53:21.436: INFO: Cluster IP family: ipv4
S
------------------------------
Jun 14 17:53:21.416: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 17:53:21.437: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
Jun 14 17:53:21.418: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 17:53:21.438: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Jun 14 17:53:21.427: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 17:53:21.487: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
Jun 14 17:53:21.422: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 17:53:21.488: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
Jun 14 17:53:21.443: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 17:53:21.492: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
Jun 14 17:53:21.441: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 17:53:21.496: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 14 17:53:21.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
Jun 14 17:53:21.655: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 14 17:53:21.659: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should update ConfigMap successfully
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:140
STEP: Creating ConfigMap configmap-6248/configmap-test-b237932e-a921-47c6-8450-9773f22e0b8f
STEP: Updating configMap configmap-6248/configmap-test-b237932e-a921-47c6-8450-9773f22e0b8f
STEP: Verifying update of ConfigMap configmap-6248/configmap-test-b237932e-a921-47c6-8450-9773f22e0b8f
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 14 17:53:21.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6248" for this suite.
•SSSSSS
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":1,"skipped":147,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 14 17:53:21.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apparmor
Jun 14 17:53:21.666: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 14 17:53:21.668: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32
Jun 14 17:53:21.671: INFO: Only supported for node OS distro [gci ubuntu] (not debian)
[AfterEach] load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36
[AfterEach] [k8s.io] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 14 17:53:21.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "apparmor-2080" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.159 seconds]
[k8s.io] [sig-node] AppArmor
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31
    can disable an AppArmor profile, using unconfined [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47

    Only supported for node OS distro [gci ubuntu] (not debian)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:267
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 14 17:53:21.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ssh
Jun 14 17:53:21.818: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 14 17:53:21.822: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:36
Jun 14 17:53:21.825: INFO: Only supported for providers [gce gke aws local] (not skeleton)
[AfterEach] [k8s.io] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 14 17:53:21.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-736" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [0.039 seconds]
[k8s.io] [sig-node] SSH
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should SSH to all nodes and run commands [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45

  Only supported for providers [gce gke aws local] (not skeleton)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:38
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 
STEP: Creating a kubernetes client Jun 14 17:53:22.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crictl Jun 14 17:53:22.645: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 14 17:53:22.649: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:33 Jun 14 17:53:22.652: INFO: Only supported for providers [gce gke] (not skeleton) [AfterEach] [k8s.io] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 17:53:22.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crictl-1096" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.052 seconds] [k8s.io] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be able to run crictl on the node [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:40 Only supported for providers [gce gke] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:35 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 17:53:21.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context Jun 14 17:53:21.659: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 
14 17:53:21.662: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:89 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Jun 14 17:53:21.669: INFO: Waiting up to 5m0s for pod "security-context-051b6524-da6a-420a-bd25-3e4c09009482" in namespace "security-context-2249" to be "Succeeded or Failed" Jun 14 17:53:21.671: INFO: Pod "security-context-051b6524-da6a-420a-bd25-3e4c09009482": Phase="Pending", Reason="", readiness=false. Elapsed: 1.802349ms Jun 14 17:53:23.842: INFO: Pod "security-context-051b6524-da6a-420a-bd25-3e4c09009482": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172872793s Jun 14 17:53:25.845: INFO: Pod "security-context-051b6524-da6a-420a-bd25-3e4c09009482": Phase="Pending", Reason="", readiness=false. Elapsed: 4.175927803s Jun 14 17:53:27.848: INFO: Pod "security-context-051b6524-da6a-420a-bd25-3e4c09009482": Phase="Pending", Reason="", readiness=false. Elapsed: 6.178879272s Jun 14 17:53:29.852: INFO: Pod "security-context-051b6524-da6a-420a-bd25-3e4c09009482": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.182363512s STEP: Saw pod success Jun 14 17:53:29.852: INFO: Pod "security-context-051b6524-da6a-420a-bd25-3e4c09009482" satisfied condition "Succeeded or Failed" Jun 14 17:53:29.855: INFO: Trying to get logs from node leguer-worker pod security-context-051b6524-da6a-420a-bd25-3e4c09009482 container test-container: STEP: delete the pod Jun 14 17:53:30.088: INFO: Waiting for pod security-context-051b6524-da6a-420a-bd25-3e4c09009482 to disappear Jun 14 17:53:30.090: INFO: Pod security-context-051b6524-da6a-420a-bd25-3e4c09009482 no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 17:53:30.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-2249" for this suite. • [SLOW TEST:8.535 seconds] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:89 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly]","total":-1,"completed":1,"skipped":76,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] Security Context 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 17:53:22.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context Jun 14 17:53:22.398: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 14 17:53:22.401: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:118 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Jun 14 17:53:22.410: INFO: Waiting up to 5m0s for pod "security-context-fff4296f-f5d2-4e4b-86fd-e6cc20c7e2d4" in namespace "security-context-4349" to be "Succeeded or Failed" Jun 14 17:53:22.412: INFO: Pod "security-context-fff4296f-f5d2-4e4b-86fd-e6cc20c7e2d4": Phase="Pending", Reason="", readiness=false. Elapsed: 1.952973ms Jun 14 17:53:24.416: INFO: Pod "security-context-fff4296f-f5d2-4e4b-86fd-e6cc20c7e2d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005159953s Jun 14 17:53:26.419: INFO: Pod "security-context-fff4296f-f5d2-4e4b-86fd-e6cc20c7e2d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00827896s Jun 14 17:53:28.428: INFO: Pod "security-context-fff4296f-f5d2-4e4b-86fd-e6cc20c7e2d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017147755s Jun 14 17:53:30.432: INFO: Pod "security-context-fff4296f-f5d2-4e4b-86fd-e6cc20c7e2d4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.021144262s STEP: Saw pod success Jun 14 17:53:30.432: INFO: Pod "security-context-fff4296f-f5d2-4e4b-86fd-e6cc20c7e2d4" satisfied condition "Succeeded or Failed" Jun 14 17:53:30.434: INFO: Trying to get logs from node leguer-worker2 pod security-context-fff4296f-f5d2-4e4b-86fd-e6cc20c7e2d4 container test-container: STEP: delete the pod Jun 14 17:53:30.715: INFO: Waiting for pod security-context-fff4296f-f5d2-4e4b-86fd-e6cc20c7e2d4 to disappear Jun 14 17:53:30.717: INFO: Pod security-context-fff4296f-f5d2-4e4b-86fd-e6cc20c7e2d4 no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 17:53:30.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-4349" for this suite. • [SLOW TEST:8.360 seconds] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:118 ------------------------------ S ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly]","total":-1,"completed":1,"skipped":709,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] NodeProblemDetector 
[DisabledForLargeClusters] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 17:53:31.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-problem-detector STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:51 Jun 14 17:53:31.026: INFO: No SSH Key for provider skeleton: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 17:53:31.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-problem-detector-5977" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds] [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should run without error [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:59 No SSH Key for provider skeleton: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:52 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 17:53:31.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 Jun 14 17:53:31.699: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [k8s.io] [sig-node] AppArmor 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 17:53:31.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-6368" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.036 seconds] [k8s.io] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 should enforce an AppArmor profile [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:267 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 17:53:22.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:109 STEP: Creating a pod to test downward api env vars Jun 14 17:53:22.740: INFO: Waiting up to 5m0s for pod "downward-api-76492a52-9a6b-4b12-958f-2103a785707c" in namespace "downward-api-8879" to be "Succeeded or Failed" Jun 14 17:53:22.742: INFO: Pod "downward-api-76492a52-9a6b-4b12-958f-2103a785707c": 
Phase="Pending", Reason="", readiness=false. Elapsed: 2.457353ms Jun 14 17:53:24.744: INFO: Pod "downward-api-76492a52-9a6b-4b12-958f-2103a785707c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004722766s Jun 14 17:53:26.747: INFO: Pod "downward-api-76492a52-9a6b-4b12-958f-2103a785707c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007435178s Jun 14 17:53:28.750: INFO: Pod "downward-api-76492a52-9a6b-4b12-958f-2103a785707c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010501232s Jun 14 17:53:30.753: INFO: Pod "downward-api-76492a52-9a6b-4b12-958f-2103a785707c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012975462s Jun 14 17:53:32.756: INFO: Pod "downward-api-76492a52-9a6b-4b12-958f-2103a785707c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.015948808s STEP: Saw pod success Jun 14 17:53:32.756: INFO: Pod "downward-api-76492a52-9a6b-4b12-958f-2103a785707c" satisfied condition "Succeeded or Failed" Jun 14 17:53:32.758: INFO: Trying to get logs from node leguer-worker2 pod downward-api-76492a52-9a6b-4b12-958f-2103a785707c container dapi-container: STEP: delete the pod Jun 14 17:53:32.770: INFO: Waiting for pod downward-api-76492a52-9a6b-4b12-958f-2103a785707c to disappear Jun 14 17:53:32.772: INFO: Pod downward-api-76492a52-9a6b-4b12-958f-2103a785707c no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 17:53:32.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8879" for this suite. 
• [SLOW TEST:10.081 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:109 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":1,"skipped":885,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 17:53:21.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context Jun 14 17:53:21.707: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 14 17:53:21.710: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:157 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Jun 14 17:53:21.719: INFO: Waiting up to 5m0s for pod "security-context-a814b66f-45b5-4259-b9b7-72e1930f09ee" in namespace "security-context-9907" to be "Succeeded or Failed" Jun 14 17:53:21.723: INFO: Pod "security-context-a814b66f-45b5-4259-b9b7-72e1930f09ee": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.341484ms Jun 14 17:53:23.842: INFO: Pod "security-context-a814b66f-45b5-4259-b9b7-72e1930f09ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123119347s Jun 14 17:53:25.845: INFO: Pod "security-context-a814b66f-45b5-4259-b9b7-72e1930f09ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126125302s Jun 14 17:53:27.848: INFO: Pod "security-context-a814b66f-45b5-4259-b9b7-72e1930f09ee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129426457s Jun 14 17:53:29.852: INFO: Pod "security-context-a814b66f-45b5-4259-b9b7-72e1930f09ee": Phase="Pending", Reason="", readiness=false. Elapsed: 8.132676919s Jun 14 17:53:31.855: INFO: Pod "security-context-a814b66f-45b5-4259-b9b7-72e1930f09ee": Phase="Pending", Reason="", readiness=false. Elapsed: 10.13600168s Jun 14 17:53:33.927: INFO: Pod "security-context-a814b66f-45b5-4259-b9b7-72e1930f09ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.20759685s STEP: Saw pod success Jun 14 17:53:33.927: INFO: Pod "security-context-a814b66f-45b5-4259-b9b7-72e1930f09ee" satisfied condition "Succeeded or Failed" Jun 14 17:53:34.025: INFO: Trying to get logs from node leguer-worker pod security-context-a814b66f-45b5-4259-b9b7-72e1930f09ee container test-container: STEP: delete the pod Jun 14 17:53:34.042: INFO: Waiting for pod security-context-a814b66f-45b5-4259-b9b7-72e1930f09ee to disappear Jun 14 17:53:34.045: INFO: Pod security-context-a814b66f-45b5-4259-b9b7-72e1930f09ee no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 17:53:34.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-9907" for this suite. 
• [SLOW TEST:12.370 seconds] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:157 ------------------------------ [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 17:53:21.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context Jun 14 17:53:21.889: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 14 17:53:21.892: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:149 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Jun 14 17:53:21.900: INFO: Waiting up to 5m0s for pod "security-context-5aa4bbba-e153-4c9e-9e0e-eaeffeca0a57" in namespace "security-context-9826" to be "Succeeded or Failed" Jun 14 17:53:21.902: INFO: Pod "security-context-5aa4bbba-e153-4c9e-9e0e-eaeffeca0a57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.538356ms Jun 14 17:53:23.906: INFO: Pod "security-context-5aa4bbba-e153-4c9e-9e0e-eaeffeca0a57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005703547s Jun 14 17:53:25.909: INFO: Pod "security-context-5aa4bbba-e153-4c9e-9e0e-eaeffeca0a57": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.008728369s Jun 14 17:53:27.911: INFO: Pod "security-context-5aa4bbba-e153-4c9e-9e0e-eaeffeca0a57": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011026157s Jun 14 17:53:29.914: INFO: Pod "security-context-5aa4bbba-e153-4c9e-9e0e-eaeffeca0a57": Phase="Pending", Reason="", readiness=false. Elapsed: 8.014209208s Jun 14 17:53:31.916: INFO: Pod "security-context-5aa4bbba-e153-4c9e-9e0e-eaeffeca0a57": Phase="Pending", Reason="", readiness=false. Elapsed: 10.016500403s Jun 14 17:53:33.927: INFO: Pod "security-context-5aa4bbba-e153-4c9e-9e0e-eaeffeca0a57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.026684518s STEP: Saw pod success Jun 14 17:53:33.927: INFO: Pod "security-context-5aa4bbba-e153-4c9e-9e0e-eaeffeca0a57" satisfied condition "Succeeded or Failed" Jun 14 17:53:34.028: INFO: Trying to get logs from node leguer-worker2 pod security-context-5aa4bbba-e153-4c9e-9e0e-eaeffeca0a57 container test-container: STEP: delete the pod Jun 14 17:53:34.043: INFO: Waiting for pod security-context-5aa4bbba-e153-4c9e-9e0e-eaeffeca0a57 to disappear Jun 14 17:53:34.046: INFO: Pod security-context-5aa4bbba-e153-4c9e-9e0e-eaeffeca0a57 no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 17:53:34.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-9826" for this suite. 
• [SLOW TEST:12.186 seconds] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:149 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":1,"skipped":184,"failed":0} SS ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":1,"skipped":294,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 17:53:31.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:103 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Jun 14 17:53:31.927: INFO: Waiting up to 5m0s for pod "security-context-f39d80a2-fb4c-4230-8aaa-c70e278eb14b" in namespace "security-context-3775" to be "Succeeded or Failed" Jun 14 17:53:31.930: INFO: Pod "security-context-f39d80a2-fb4c-4230-8aaa-c70e278eb14b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.54666ms Jun 14 17:53:34.028: INFO: Pod "security-context-f39d80a2-fb4c-4230-8aaa-c70e278eb14b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100935177s Jun 14 17:53:36.032: INFO: Pod "security-context-f39d80a2-fb4c-4230-8aaa-c70e278eb14b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.104802431s STEP: Saw pod success Jun 14 17:53:36.032: INFO: Pod "security-context-f39d80a2-fb4c-4230-8aaa-c70e278eb14b" satisfied condition "Succeeded or Failed" Jun 14 17:53:36.035: INFO: Trying to get logs from node leguer-worker2 pod security-context-f39d80a2-fb4c-4230-8aaa-c70e278eb14b container test-container: STEP: delete the pod Jun 14 17:53:36.046: INFO: Waiting for pod security-context-f39d80a2-fb4c-4230-8aaa-c70e278eb14b to disappear Jun 14 17:53:36.051: INFO: Pod security-context-f39d80a2-fb4c-4230-8aaa-c70e278eb14b no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 17:53:36.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-3775" for this suite. 
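The `container.SecurityContext.RunAsUser` spec above runs the container process under a specific UID. The field path is the real Kubernetes API; the image, command, and UID below are assumptions for illustration:

```yaml
# Sketch: container-level runAsUser. Field names are the real Kubernetes
# API; image, command, and the UID value are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: runasuser-container-demo       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                     # assumed image
    command: ["sh", "-c", "id -u"]     # the test reads the container log
    securityContext:
      runAsUser: 1001                  # assumed UID
```

The suite then fetches the container log (the "Trying to get logs" entry above) to verify the process actually ran as that UID.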
• ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":2,"skipped":1452,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 17:53:21.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Jun 14 17:53:21.716: INFO: Waiting up to 5m0s for pod "security-context-45cc5890-7352-4ade-b84e-d0618158c22b" in namespace "security-context-9965" to be "Succeeded or Failed" Jun 14 17:53:21.719: INFO: Pod "security-context-45cc5890-7352-4ade-b84e-d0618158c22b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.278378ms Jun 14 17:53:23.842: INFO: Pod "security-context-45cc5890-7352-4ade-b84e-d0618158c22b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125904777s Jun 14 17:53:25.845: INFO: Pod "security-context-45cc5890-7352-4ade-b84e-d0618158c22b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.128866256s Jun 14 17:53:27.848: INFO: Pod "security-context-45cc5890-7352-4ade-b84e-d0618158c22b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132013527s Jun 14 17:53:29.852: INFO: Pod "security-context-45cc5890-7352-4ade-b84e-d0618158c22b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.135441571s Jun 14 17:53:31.855: INFO: Pod "security-context-45cc5890-7352-4ade-b84e-d0618158c22b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.138809847s Jun 14 17:53:33.927: INFO: Pod "security-context-45cc5890-7352-4ade-b84e-d0618158c22b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.21097013s Jun 14 17:53:35.930: INFO: Pod "security-context-45cc5890-7352-4ade-b84e-d0618158c22b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.213959801s Jun 14 17:53:37.936: INFO: Pod "security-context-45cc5890-7352-4ade-b84e-d0618158c22b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.219547135s STEP: Saw pod success Jun 14 17:53:37.936: INFO: Pod "security-context-45cc5890-7352-4ade-b84e-d0618158c22b" satisfied condition "Succeeded or Failed" Jun 14 17:53:37.939: INFO: Trying to get logs from node leguer-worker pod security-context-45cc5890-7352-4ade-b84e-d0618158c22b container test-container: STEP: delete the pod Jun 14 17:53:37.951: INFO: Waiting for pod security-context-45cc5890-7352-4ade-b84e-d0618158c22b to disappear Jun 14 17:53:37.955: INFO: Pod security-context-45cc5890-7352-4ade-b84e-d0618158c22b no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 17:53:37.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-9965" for this suite. 
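The companion spec just completed sets `RunAsUser` on the pod-level security context instead, where it applies to every container that does not override it. Again the field path is the real API; image, command, and UID are illustrative:

```yaml
# Sketch: pod-level runAsUser. A container-level securityContext.runAsUser,
# if present, would override this value. Image/command/UID are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: runasuser-pod-demo             # hypothetical name
spec:
  securityContext:
    runAsUser: 1001                    # assumed UID, inherited by containers
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                     # assumed image
    command: ["sh", "-c", "id -u"]
```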
• [SLOW TEST:16.280 seconds] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":1,"skipped":23,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ Jun 14 17:53:38.044: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 17:53:34.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:171 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Jun 14 17:53:34.265: INFO: Waiting up to 5m0s for pod "security-context-d1696244-037e-482e-adfd-f9b96ee41193" in namespace "security-context-1492" to be "Succeeded or Failed" Jun 14 17:53:34.268: INFO: Pod "security-context-d1696244-037e-482e-adfd-f9b96ee41193": Phase="Pending", Reason="", readiness=false. Elapsed: 2.557225ms Jun 14 17:53:36.270: INFO: Pod "security-context-d1696244-037e-482e-adfd-f9b96ee41193": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005116472s Jun 14 17:53:38.274: INFO: Pod "security-context-d1696244-037e-482e-adfd-f9b96ee41193": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008616284s STEP: Saw pod success Jun 14 17:53:38.274: INFO: Pod "security-context-d1696244-037e-482e-adfd-f9b96ee41193" satisfied condition "Succeeded or Failed" Jun 14 17:53:38.276: INFO: Trying to get logs from node leguer-worker2 pod security-context-d1696244-037e-482e-adfd-f9b96ee41193 container test-container: STEP: delete the pod Jun 14 17:53:38.289: INFO: Waiting for pod security-context-d1696244-037e-482e-adfd-f9b96ee41193 to disappear Jun 14 17:53:38.291: INFO: Pod security-context-d1696244-037e-482e-adfd-f9b96ee41193 no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 17:53:38.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-1492" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":2,"skipped":415,"failed":0} Jun 14 17:53:38.301: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 17:53:30.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:164 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Jun 14 17:53:31.023: INFO: Waiting up to 5m0s for pod "security-context-aeeb48aa-ac76-4a70-ba0d-01298b22bb0b" in namespace "security-context-5866" to be "Succeeded or Failed" Jun 14 17:53:31.025: INFO: Pod 
"security-context-aeeb48aa-ac76-4a70-ba0d-01298b22bb0b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.929559ms Jun 14 17:53:33.028: INFO: Pod "security-context-aeeb48aa-ac76-4a70-ba0d-01298b22bb0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004683805s Jun 14 17:53:35.033: INFO: Pod "security-context-aeeb48aa-ac76-4a70-ba0d-01298b22bb0b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009387884s Jun 14 17:53:37.037: INFO: Pod "security-context-aeeb48aa-ac76-4a70-ba0d-01298b22bb0b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014241863s Jun 14 17:53:39.042: INFO: Pod "security-context-aeeb48aa-ac76-4a70-ba0d-01298b22bb0b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018388928s Jun 14 17:53:41.045: INFO: Pod "security-context-aeeb48aa-ac76-4a70-ba0d-01298b22bb0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.022319318s STEP: Saw pod success Jun 14 17:53:41.046: INFO: Pod "security-context-aeeb48aa-ac76-4a70-ba0d-01298b22bb0b" satisfied condition "Succeeded or Failed" Jun 14 17:53:41.048: INFO: Trying to get logs from node leguer-worker pod security-context-aeeb48aa-ac76-4a70-ba0d-01298b22bb0b container test-container: STEP: delete the pod Jun 14 17:53:41.064: INFO: Waiting for pod security-context-aeeb48aa-ac76-4a70-ba0d-01298b22bb0b to disappear Jun 14 17:53:41.067: INFO: Pod security-context-aeeb48aa-ac76-4a70-ba0d-01298b22bb0b no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 17:53:41.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-5866" for this suite. 
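The `runtime/default` spec above asks for the container runtime's default seccomp profile rather than `unconfined`; per the STEP line it goes through the same alpha pod annotation, only with a different value. A hedged sketch (image and command are assumptions):

```yaml
# Sketch only: seccomp "runtime/default" via the alpha pod annotation.
# Image, command, and pod name are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-runtime-default-pod    # hypothetical name
  annotations:
    seccomp.security.alpha.kubernetes.io/pod: runtime/default
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                     # assumed image
    command: ["sh", "-c", "grep Seccomp /proc/self/status"]
```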
• [SLOW TEST:10.083 seconds] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:164 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":2,"skipped":652,"failed":0} Jun 14 17:53:41.077: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 17:53:34.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups Jun 14 17:53:34.131: INFO: Waiting up to 5m0s for pod "security-context-9e37423c-2057-48ed-b29b-0d521408aa1a" in namespace "security-context-1850" to be "Succeeded or Failed" Jun 14 17:53:34.134: INFO: Pod "security-context-9e37423c-2057-48ed-b29b-0d521408aa1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.615085ms Jun 14 17:53:36.137: INFO: Pod "security-context-9e37423c-2057-48ed-b29b-0d521408aa1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005183615s Jun 14 17:53:38.139: INFO: Pod "security-context-9e37423c-2057-48ed-b29b-0d521408aa1a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.007979548s Jun 14 17:53:40.143: INFO: Pod "security-context-9e37423c-2057-48ed-b29b-0d521408aa1a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01167615s Jun 14 17:53:42.226: INFO: Pod "security-context-9e37423c-2057-48ed-b29b-0d521408aa1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.094935592s STEP: Saw pod success Jun 14 17:53:42.226: INFO: Pod "security-context-9e37423c-2057-48ed-b29b-0d521408aa1a" satisfied condition "Succeeded or Failed" Jun 14 17:53:42.229: INFO: Trying to get logs from node leguer-worker pod security-context-9e37423c-2057-48ed-b29b-0d521408aa1a container test-container: STEP: delete the pod Jun 14 17:53:42.241: INFO: Waiting for pod security-context-9e37423c-2057-48ed-b29b-0d521408aa1a to disappear Jun 14 17:53:42.244: INFO: Pod security-context-9e37423c-2057-48ed-b29b-0d521408aa1a no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 17:53:42.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-1850" for this suite. 
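The `SupplementalGroups` spec above adds extra group IDs to the container's first process via the pod-level security context. The field is the real API; the GID values, image, and command are assumptions:

```yaml
# Sketch: pod.Spec.SecurityContext.SupplementalGroups. The GID list,
# image, and command are assumptions for illustration.
apiVersion: v1
kind: Pod
metadata:
  name: supplemental-groups-demo       # hypothetical name
spec:
  securityContext:
    supplementalGroups: [1234]         # assumed extra GIDs
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                     # assumed image
    command: ["sh", "-c", "id -G"]     # prints the process's group list
```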
• [SLOW TEST:8.155 seconds] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":2,"skipped":211,"failed":0} Jun 14 17:53:42.253: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 17:53:36.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446 [It] should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 STEP: creating the pod that should always exit 0 STEP: submitting the pod to kubernetes Jun 14 17:53:36.343: INFO: Waiting up to 5m0s for pod "pod-always-succeedc113578c-a6d8-4592-8d59-40775877e143" in namespace "pods-1961" to be "Succeeded or Failed" Jun 14 17:53:36.345: INFO: Pod "pod-always-succeedc113578c-a6d8-4592-8d59-40775877e143": Phase="Pending", Reason="", readiness=false. Elapsed: 2.648808ms Jun 14 17:53:38.348: INFO: Pod "pod-always-succeedc113578c-a6d8-4592-8d59-40775877e143": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.005467853s Jun 14 17:53:40.352: INFO: Pod "pod-always-succeedc113578c-a6d8-4592-8d59-40775877e143": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009225466s Jun 14 17:53:42.356: INFO: Pod "pod-always-succeedc113578c-a6d8-4592-8d59-40775877e143": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01274787s Jun 14 17:53:44.359: INFO: Pod "pod-always-succeedc113578c-a6d8-4592-8d59-40775877e143": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.01607102s STEP: Saw pod success Jun 14 17:53:44.359: INFO: Pod "pod-always-succeedc113578c-a6d8-4592-8d59-40775877e143" satisfied condition "Succeeded or Failed" STEP: Getting events about the pod STEP: Checking events about the pod STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 17:53:46.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1961" for this suite. 
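The sandbox-lifecycle spec above ("creating the pod that should always exit 0", pod name `pod-always-succeed…`) runs a container that exits immediately with status 0 and then inspects pod events to confirm the kubelet did not create a second sandbox for the finished pod. A sketch of such a pod — the restart policy, image, and command are assumptions, not taken from the log:

```yaml
# Sketch of an "always exits 0" pod like the one named in the log.
# restartPolicy, image, and command are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-always-succeed             # shortened from the logged name
spec:
  restartPolicy: OnFailure             # assumed; must allow phase Succeeded
  containers:
  - name: test-container
    image: busybox                     # assumed image
    command: ["true"]                  # exits 0 immediately
```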
• [SLOW TEST:10.074 seconds] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 [k8s.io] Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":3,"skipped":1613,"failed":0} Jun 14 17:53:46.385: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 17:53:21.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop Jun 14 17:53:21.713: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 14 17:53:21.717: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: waiting for pod running STEP: deleting the pod gracefully STEP: verifying the pod is running while in the graceful period termination Jun 14 17:53:49.776: INFO: pod is running [AfterEach] [k8s.io] [sig-node] PreStop 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 17:53:49.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-4152" for this suite. • [SLOW TEST:28.103 seconds] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":1,"skipped":179,"failed":0} Jun 14 17:53:49.788: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 17:53:33.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:53 [It] should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jun 14 17:53:42.250: INFO: start=2021-06-14 17:53:37.095022221 +0000 UTC m=+18.125728948, now=2021-06-14 17:53:42.250926596 +0000 UTC m=+23.281633387, kubelet pod: 
{"metadata":{"name":"pod-submit-remove-0837cef1-da08-41ab-bb0a-e7b3feab4f4d","namespace":"pods-632","uid":"98964dda-dad1-4704-af0d-226501db0422","resourceVersion":"6335156","creationTimestamp":"2021-06-14T17:53:33Z","deletionTimestamp":"2021-06-14T17:54:07Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"72933846"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.2.82\"\n ],\n \"mac\": \"1e:55:5e:f4:80:b9\",\n \"default\": true,\n \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.2.82\"\n ],\n \"mac\": \"1e:55:5e:f4:80:b9\",\n \"default\": true,\n \"dns\": {}\n}]","kubernetes.io/config.seen":"2021-06-14T17:53:33.086049565Z","kubernetes.io/config.source":"api"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-06-14T17:53:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"default-token-d95xh","secret":{"secretName":"default-token-d95xh","defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.21","args":["pause"],"resources":{},"volumeMounts":[{"name":"default-token-d95xh","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"Cl
usterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"leguer-worker2","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-06-14T17:53:33Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-06-14T17:53:38Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-06-14T17:53:38Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-06-14T17:53:33Z"}],"hostIP":"172.18.0.5","podIP":"10.244.2.82","podIPs":[{"ip":"10.244.2.82"}],"startTime":"2021-06-14T17:53:33Z","containerStatuses":[{"name":"agnhost-container","state":{"waiting":{"reason":"ContainerCreating"}},"lastState":{"terminated":{"exitCode":137,"reason":"ContainerStatusUnknown","message":"The container could not be located when the pod was deleted. 
The container used to be Running","startedAt":null,"finishedAt":null}},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.21","imageID":"","started":false}],"qosClass":"BestEffort"}} Jun 14 17:53:47.113: INFO: start=2021-06-14 17:53:37.095022221 +0000 UTC m=+18.125728948, now=2021-06-14 17:53:47.113930033 +0000 UTC m=+28.144636780, kubelet pod: {"metadata":{"name":"pod-submit-remove-0837cef1-da08-41ab-bb0a-e7b3feab4f4d","namespace":"pods-632","uid":"98964dda-dad1-4704-af0d-226501db0422","resourceVersion":"6335156","creationTimestamp":"2021-06-14T17:53:33Z","deletionTimestamp":"2021-06-14T17:54:07Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"72933846"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.2.82\"\n ],\n \"mac\": \"1e:55:5e:f4:80:b9\",\n \"default\": true,\n \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.2.82\"\n ],\n \"mac\": \"1e:55:5e:f4:80:b9\",\n \"default\": true,\n \"dns\": 
{}\n}]","kubernetes.io/config.seen":"2021-06-14T17:53:33.086049565Z","kubernetes.io/config.source":"api"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-06-14T17:53:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"default-token-d95xh","secret":{"secretName":"default-token-d95xh","defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.21","args":["pause"],"resources":{},"volumeMounts":[{"name":"default-token-d95xh","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"leguer-worker2","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-06-14T17:53:33Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-06-14T17:53:38Z","reason":"ContainersNotR
eady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-06-14T17:53:38Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-06-14T17:53:33Z"}],"hostIP":"172.18.0.5","podIP":"10.244.2.82","podIPs":[{"ip":"10.244.2.82"}],"startTime":"2021-06-14T17:53:33Z","containerStatuses":[{"name":"agnhost-container","state":{"waiting":{"reason":"ContainerCreating"}},"lastState":{"terminated":{"exitCode":137,"reason":"ContainerStatusUnknown","message":"The container could not be located when the pod was deleted. The container used to be Running","startedAt":null,"finishedAt":null}},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.21","imageID":"","started":false}],"qosClass":"BestEffort"}} Jun 14 17:53:52.110: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 17:53:52.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-632" for this suite. 
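The Delete Grace Period spec above can be read directly off the kubelet pod dumps: the pod is created with `terminationGracePeriodSeconds: 0`, then deleted with an explicit 30-second grace period (`deletionGracePeriodSeconds: 30` and the `deletionTimestamp` in the dump), and the suite polls until the kubelet stops reporting the pod. A trimmed sketch of the pod as shown in the dump — everything not present in the log is omitted:

```yaml
# Trimmed from the kubelet pod dump above: agnhost "pause" container with
# terminationGracePeriodSeconds as logged. Most fields omitted for brevity.
apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove              # shortened from the logged name
  labels:
    name: foo
spec:
  terminationGracePeriodSeconds: 0     # as in the dump
  restartPolicy: Always
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.21
    args: ["pause"]
```

Deleting such a pod with an override like `kubectl delete pod <name> --grace-period=30` (the exact mechanism the e2e client uses is not shown in the log) produces the 30-second `deletionGracePeriodSeconds` seen in the dump.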
• [SLOW TEST:19.076 seconds] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 [k8s.io] Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed","total":-1,"completed":2,"skipped":1068,"failed":0} Jun 14 17:53:52.125: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 17:53:22.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename mount-propagation STEP: Waiting for a default service account to be provisioned in namespace [It] should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 Jun 14 17:53:42.690: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-5897 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:42.691: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:42.856: INFO: Exec stderr: "" Jun 14 17:53:42.860: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-5897 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:42.860: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:43.003: INFO: Exec stderr: "" Jun 14 17:53:43.006: INFO: 
ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-5897 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:43.006: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:43.158: INFO: Exec stderr: "" Jun 14 17:53:43.161: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-5897 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:43.161: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:43.305: INFO: Exec stderr: "" Jun 14 17:53:43.308: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-5897 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:43.308: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:43.465: INFO: Exec stderr: "" Jun 14 17:53:43.468: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-5897 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:43.468: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:43.606: INFO: Exec stderr: "" Jun 14 17:53:43.609: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-5897 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:43.609: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:43.956: INFO: Exec stderr: "" Jun 14 17:53:43.959: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-5897 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:43.959: INFO: >>> kubeConfig: /root/.kube/config 
Jun 14 17:53:44.066: INFO: Exec stderr: "" Jun 14 17:53:44.069: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-5897 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:44.069: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:44.163: INFO: Exec stderr: "" Jun 14 17:53:44.167: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-5897 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:44.167: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:44.257: INFO: Exec stderr: "" Jun 14 17:53:44.260: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-5897 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:44.260: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:44.351: INFO: Exec stderr: "" Jun 14 17:53:44.354: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-5897 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:44.354: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:44.488: INFO: Exec stderr: "" Jun 14 17:53:44.492: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-5897 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:44.492: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:44.631: INFO: Exec stderr: "" Jun 14 17:53:44.634: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-5897 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false Quiet:false} Jun 14 17:53:44.635: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:44.785: INFO: Exec stderr: "" Jun 14 17:53:44.788: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-5897 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:44.788: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:44.934: INFO: Exec stderr: "" Jun 14 17:53:44.937: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-5897 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:44.937: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:45.077: INFO: Exec stderr: "" Jun 14 17:53:45.082: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-master /mnt/test/master; echo master > /mnt/test/master/file] Namespace:mount-propagation-5897 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:45.082: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:45.182: INFO: Exec stderr: "" Jun 14 17:53:45.185: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-slave /mnt/test/slave; echo slave > /mnt/test/slave/file] Namespace:mount-propagation-5897 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:45.185: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:45.342: INFO: Exec stderr: "" Jun 14 17:53:45.345: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-private /mnt/test/private; echo private > /mnt/test/private/file] Namespace:mount-propagation-5897 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 
17:53:45.345: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:45.470: INFO: Exec stderr: "" Jun 14 17:53:45.473: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-default /mnt/test/default; echo default > /mnt/test/default/file] Namespace:mount-propagation-5897 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:45.473: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:45.605: INFO: Exec stderr: "" Jun 14 17:53:51.622: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-5897"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-5897"/host; echo host > "/var/lib/kubelet/mount-propagation-5897"/host/file] Namespace:mount-propagation-5897 PodName:hostexec-leguer-worker2-p5vsf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 17:53:51.622: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:51.795: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-5897 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:51.795: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:51.946: INFO: pod master mount master: stdout: "master", stderr: "" error: Jun 14 17:53:51.949: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-5897 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:51.949: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:52.086: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Jun 14 17:53:52.089: INFO: ExecWithOptions 
{Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-5897 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:52.089: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:52.218: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Jun 14 17:53:52.221: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-5897 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:52.221: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:52.343: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Jun 14 17:53:52.347: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-5897 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:52.347: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:52.505: INFO: pod master mount host: stdout: "host", stderr: "" error: Jun 14 17:53:52.508: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-5897 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:52.508: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:52.671: INFO: pod slave mount master: stdout: "master", stderr: "" error: Jun 14 17:53:52.675: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-5897 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:52.675: INFO: >>> kubeConfig: 
/root/.kube/config Jun 14 17:53:52.770: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: Jun 14 17:53:52.774: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-5897 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:52.774: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:52.910: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Jun 14 17:53:52.913: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-5897 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:52.913: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:53.044: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Jun 14 17:53:53.048: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-5897 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:53.048: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:53.177: INFO: pod slave mount host: stdout: "host", stderr: "" error: Jun 14 17:53:53.180: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-5897 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:53.181: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:53.313: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Jun 14 17:53:53.317: INFO: ExecWithOptions 
{Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-5897 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:53.317: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:53.456: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Jun 14 17:53:53.460: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-5897 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:53.460: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:53.593: INFO: pod private mount private: stdout: "private", stderr: "" error: Jun 14 17:53:53.596: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-5897 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:53.596: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:53.730: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Jun 14 17:53:53.733: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-5897 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:53.733: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:53.876: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Jun 14 17:53:53.880: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-5897 PodName:default ContainerName:cntr Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:53.880: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:53.982: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Jun 14 17:53:53.985: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-5897 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:53.985: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:54.127: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Jun 14 17:53:54.131: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-5897 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:54.131: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:54.271: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Jun 14 17:53:54.276: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-5897 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:54.276: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:54.411: INFO: pod default mount default: stdout: "default", stderr: "" error: Jun 14 17:53:54.414: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-5897 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:54.414: INFO: >>> kubeConfig: /root/.kube/config Jun 14 
17:53:54.517: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Jun 14 17:53:54.517: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-5897"/master/file` = master] Namespace:mount-propagation-5897 PodName:hostexec-leguer-worker2-p5vsf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 17:53:54.517: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:54.651: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! -e "/var/lib/kubelet/mount-propagation-5897"/slave/file] Namespace:mount-propagation-5897 PodName:hostexec-leguer-worker2-p5vsf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 17:53:54.651: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:54.828: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-5897"/host] Namespace:mount-propagation-5897 PodName:hostexec-leguer-worker2-p5vsf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 17:53:54.828: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:54.976: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/default] Namespace:mount-propagation-5897 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:54.976: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:55.098: INFO: Exec stderr: "" Jun 14 17:53:55.126: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/private] Namespace:mount-propagation-5897 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 
14 17:53:55.126: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:55.351: INFO: Exec stderr: "" Jun 14 17:53:55.354: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/slave] Namespace:mount-propagation-5897 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:55.354: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:55.486: INFO: Exec stderr: "" Jun 14 17:53:55.525: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/master] Namespace:mount-propagation-5897 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 17:53:55.525: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:53:55.745: INFO: Exec stderr: "" Jun 14 17:53:55.745: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -rf "/var/lib/kubelet/mount-propagation-5897"] Namespace:mount-propagation-5897 PodName:hostexec-leguer-worker2-p5vsf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 17:53:55.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod hostexec-leguer-worker2-p5vsf in namespace mount-propagation-5897 [AfterEach] [k8s.io] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 17:53:55.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mount-propagation-5897" for this suite. 
• [SLOW TEST:33.417 seconds]
[k8s.io] [sig-node] Mount propagation
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should propagate mounts to the host
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":1,"skipped":780,"failed":0}
Jun 14 17:53:55.908: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [k8s.io] [sig-node] kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 14 17:53:21.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet
Jun 14 17:53:21.659: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 14 17:53:21.662: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:274
[BeforeEach] [k8s.io] [sig-node] Clean up pods on node
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:295
[It] kubelet should be able to delete 10 pods per node in 1m0s.
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 STEP: Creating a RC of 20 pods and wait until all pods of this RC are running STEP: creating replication controller cleanup20-b2c1ad42-64fe-4305-b9b3-8416742ecb04 in namespace kubelet-9953 I0614 17:53:21.695877 26 runners.go:190] Created replication controller with name: cleanup20-b2c1ad42-64fe-4305-b9b3-8416742ecb04, namespace: kubelet-9953, replica count: 20 Jun 14 17:53:21.805: INFO: Missing info/stats for container "runtime" on node "leguer-control-plane" Jun 14 17:53:21.832: INFO: Missing info/stats for container "runtime" on node "leguer-worker" Jun 14 17:53:21.839: INFO: Missing info/stats for container "runtime" on node "leguer-worker2" Jun 14 17:53:26.942: INFO: Missing info/stats for container "runtime" on node "leguer-control-plane" Jun 14 17:53:27.041: INFO: Missing info/stats for container "runtime" on node "leguer-worker2" Jun 14 17:53:29.436: INFO: Missing info/stats for container "runtime" on node "leguer-worker" I0614 17:53:31.746284 26 runners.go:190] cleanup20-b2c1ad42-64fe-4305-b9b3-8416742ecb04 Pods: 20 out of 20 created, 12 running, 8 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 14 17:53:32.082: INFO: Missing info/stats for container "runtime" on node "leguer-control-plane" Jun 14 17:53:32.250: INFO: Missing info/stats for container "runtime" on node "leguer-worker2" Jun 14 17:53:34.623: INFO: Missing info/stats for container "runtime" on node "leguer-worker" Jun 14 17:53:37.231: INFO: Missing info/stats for container "runtime" on node "leguer-control-plane" Jun 14 17:53:37.448: INFO: Missing info/stats for container "runtime" on node "leguer-worker2" Jun 14 17:53:39.845: INFO: Missing info/stats for container "runtime" on node "leguer-worker" I0614 17:53:41.746778 26 runners.go:190] cleanup20-b2c1ad42-64fe-4305-b9b3-8416742ecb04 Pods: 20 out of 20 created, 20 running, 0 pending, 0 waiting, 0 inactive, 
0 terminating, 0 unknown, 0 runningButNotReady Jun 14 17:53:42.396: INFO: Missing info/stats for container "runtime" on node "leguer-control-plane" Jun 14 17:53:42.699: INFO: Missing info/stats for container "runtime" on node "leguer-worker2" Jun 14 17:53:42.747: INFO: Checking pods on node leguer-worker2 via /runningpods endpoint Jun 14 17:53:42.747: INFO: Checking pods on node leguer-worker via /runningpods endpoint Jun 14 17:53:42.781: INFO: [Resource usage on node "leguer-worker2" is not ready yet, Resource usage on node "leguer-control-plane" is not ready yet, Resource usage on node "leguer-worker" is not ready yet] Jun 14 17:53:42.781: INFO: STEP: Deleting the RC STEP: deleting ReplicationController cleanup20-b2c1ad42-64fe-4305-b9b3-8416742ecb04 in namespace kubelet-9953, will wait for the garbage collector to delete the pods Jun 14 17:53:42.841: INFO: Deleting ReplicationController cleanup20-b2c1ad42-64fe-4305-b9b3-8416742ecb04 took: 6.671625ms Jun 14 17:53:44.142: INFO: Terminating ReplicationController cleanup20-b2c1ad42-64fe-4305-b9b3-8416742ecb04 pods took: 1.300403466s Jun 14 17:53:44.970: INFO: Missing info/stats for container "runtime" on node "leguer-worker" Jun 14 17:53:47.561: INFO: Missing info/stats for container "runtime" on node "leguer-control-plane" Jun 14 17:53:47.890: INFO: Missing info/stats for container "runtime" on node "leguer-worker2" Jun 14 17:53:50.090: INFO: Missing info/stats for container "runtime" on node "leguer-worker" Jun 14 17:53:52.747: INFO: Missing info/stats for container "runtime" on node "leguer-control-plane" Jun 14 17:53:53.108: INFO: Missing info/stats for container "runtime" on node "leguer-worker2" Jun 14 17:53:55.239: INFO: Missing info/stats for container "runtime" on node "leguer-worker" Jun 14 17:53:57.908: INFO: Missing info/stats for container "runtime" on node "leguer-control-plane" Jun 14 17:53:58.296: INFO: Missing info/stats for container "runtime" on node "leguer-worker2" Jun 14 17:53:59.142: INFO: 
Checking pods on node leguer-worker2 via /runningpods endpoint
Jun 14 17:53:59.142: INFO: Checking pods on node leguer-worker via /runningpods endpoint
Jun 14 17:53:59.159: INFO: Deleting 20 pods on 2 nodes completed in 1.017272858s after the RC was deleted
Jun 14 17:53:59.159: INFO: CPU usage of containers on node "leguer-worker":
container  5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"        0.000  0.000  1.766  1.811  1.811  1.811  1.811
"runtime"  0.000  0.000  0.000  0.000  0.000  0.000  0.000
"kubelet"  0.000  0.000  0.000  0.000  0.000  0.000  0.000

CPU usage of containers on node "leguer-worker2":
container  5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"        0.000  0.000  1.614  5.111  5.111  5.111  5.111
"runtime"  0.000  0.000  0.000  0.000  0.000  0.000  0.000
"kubelet"  0.000  0.000  0.000  0.000  0.000  0.000  0.000

CPU usage of containers on node "leguer-control-plane":
container  5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"        0.000  0.000  2.011  6.680  6.680  6.680  6.680
"runtime"  0.000  0.000  0.000  0.000  0.000  0.000  0.000
"kubelet"  0.000  0.000  0.000  0.000  0.000  0.000  0.000

[AfterEach] [k8s.io] [sig-node] Clean up pods on node
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:326
STEP: removing the label kubelet_cleanup off the node leguer-worker
STEP: verifying the node doesn't have the label kubelet_cleanup
STEP: removing the label kubelet_cleanup off the node leguer-worker2
STEP: verifying the node doesn't have the label kubelet_cleanup
[AfterEach] [k8s.io] [sig-node] kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 14 17:53:59.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-9953" for this suite.
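The "RC of 20 pods" the cleanup test creates above is the classic core/v1 ReplicationController shape. An illustrative manifest following the same pattern — names, label, and image are assumptions (the real test generates a `cleanup20-<uid>` name and uses its own pod template):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: cleanup20-demo              # real test appends a random UID
spec:
  replicas: 20
  selector:
    name: cleanup20-demo            # selector must match the template labels
  template:
    metadata:
      labels:
        name: cleanup20-demo
    spec:
      containers:
      - name: pause                 # illustrative minimal long-running container
        image: k8s.gcr.io/pause:3.2
```

Deleting the RC without orphaning its pods then leaves the garbage collector to remove the 20 pods, which is the window the test measures against its 1m0s budget.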
• [SLOW TEST:37.579 seconds]
[k8s.io] [sig-node] kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  [k8s.io] [sig-node] Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
    kubelet should be able to delete 10 pods per node in 1m0s.
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] kubelet [k8s.io] [sig-node] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":1,"skipped":125,"failed":0}
Jun 14 17:53:59.202: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 14 17:53:22.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pod Container Status
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:202
[It] should never report success for a pending container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206
STEP: creating pods that should always exit 1 and terminating the pod after a random delay
Jun 14 17:53:27.110: INFO: watch delete seen for pod-submit-status-2-0
Jun 14 17:53:27.110: INFO: Pod pod-submit-status-2-0 on node leguer-worker2 timings total=4.940699614s t=1.145s run=0s execute=0s
Jun 14 17:53:27.711: INFO: watch delete seen for pod-submit-status-0-0
Jun 14 17:53:27.711: INFO: Pod pod-submit-status-0-0 on node leguer-worker2 timings total=5.54163656s t=1.661s run=0s execute=0s
Jun 14 17:53:28.082: INFO: watch
delete seen for pod-submit-status-1-0
Jun 14 17:53:28.083: INFO: Pod pod-submit-status-1-0 on node leguer-worker timings total=5.912992864s t=929ms run=0s execute=0s
Jun 14 17:53:32.280: INFO: watch delete seen for pod-submit-status-2-1
Jun 14 17:53:32.280: INFO: Pod pod-submit-status-2-1 on node leguer-worker timings total=5.169814715s t=1.967s run=0s execute=0s
Jun 14 17:53:34.081: INFO: watch delete seen for pod-submit-status-0-1
Jun 14 17:53:34.081: INFO: Pod pod-submit-status-0-1 on node leguer-worker timings total=6.370107817s t=1.493s run=0s execute=0s
Jun 14 17:53:37.080: INFO: watch delete seen for pod-submit-status-1-1
Jun 14 17:53:37.080: INFO: Pod pod-submit-status-1-1 on node leguer-worker timings total=8.997570583s t=950ms run=0s execute=0s
Jun 14 17:53:38.684: INFO: watch delete seen for pod-submit-status-2-2
Jun 14 17:53:38.684: INFO: Pod pod-submit-status-2-2 on node leguer-worker timings total=6.403938152s t=1.527s run=1s execute=0s
Jun 14 17:53:40.532: INFO: watch delete seen for pod-submit-status-0-2
Jun 14 17:53:40.532: INFO: Pod pod-submit-status-0-2 on node leguer-worker timings total=6.450997171s t=1.543s run=0s execute=0s
Jun 14 17:53:41.524: INFO: watch delete seen for pod-submit-status-1-2
Jun 14 17:53:41.524: INFO: Pod pod-submit-status-1-2 on node leguer-worker timings total=4.443435921s t=1.102s run=0s execute=0s
Jun 14 17:53:43.283: INFO: watch delete seen for pod-submit-status-2-3
Jun 14 17:53:43.283: INFO: Pod pod-submit-status-2-3 on node leguer-worker timings total=4.59909772s t=1.808s run=0s execute=0s
Jun 14 17:53:45.085: INFO: watch delete seen for pod-submit-status-0-3
Jun 14 17:53:45.085: INFO: Pod pod-submit-status-0-3 on node leguer-worker timings total=4.552835254s t=807ms run=0s execute=0s
Jun 14 17:53:46.482: INFO: watch delete seen for pod-submit-status-1-3
Jun 14 17:53:46.482: INFO: Pod pod-submit-status-1-3 on node leguer-worker timings total=4.958506527s t=820ms run=0s execute=0s
Jun 14 17:53:48.884: INFO: watch delete seen for pod-submit-status-2-4
Jun 14 17:53:48.884: INFO: Pod pod-submit-status-2-4 on node leguer-worker timings total=5.600398377s t=417ms run=0s execute=0s
Jun 14 17:53:52.683: INFO: watch delete seen for pod-submit-status-2-5
Jun 14 17:53:52.683: INFO: Pod pod-submit-status-2-5 on node leguer-worker timings total=3.799572693s t=1.766s run=0s execute=0s
Jun 14 17:53:53.884: INFO: watch delete seen for pod-submit-status-1-4
Jun 14 17:53:53.884: INFO: Pod pod-submit-status-1-4 on node leguer-worker timings total=7.401468403s t=1.164s run=1s execute=0s
Jun 14 17:53:56.332: INFO: watch delete seen for pod-submit-status-0-4
Jun 14 17:53:56.332: INFO: Pod pod-submit-status-0-4 on node leguer-worker timings total=11.247020634s t=923ms run=0s execute=0s
Jun 14 17:53:58.682: INFO: watch delete seen for pod-submit-status-1-5
Jun 14 17:53:58.682: INFO: Pod pod-submit-status-1-5 on node leguer-worker timings total=4.79786405s t=1.132s run=1s execute=0s
Jun 14 17:53:59.283: INFO: watch delete seen for pod-submit-status-2-6
Jun 14 17:53:59.283: INFO: Pod pod-submit-status-2-6 on node leguer-worker timings total=6.599125281s t=1.359s run=1s execute=0s
Jun 14 17:54:01.483: INFO: watch delete seen for pod-submit-status-0-5
Jun 14 17:54:01.483: INFO: Pod pod-submit-status-0-5 on node leguer-worker timings total=5.150422639s t=400ms run=0s execute=0s
Jun 14 17:54:02.683: INFO: watch delete seen for pod-submit-status-1-6
Jun 14 17:54:02.683: INFO: Pod pod-submit-status-1-6 on node leguer-worker timings total=4.00106211s t=1.634s run=0s execute=0s
Jun 14 17:54:04.684: INFO: watch delete seen for pod-submit-status-0-6
Jun 14 17:54:04.684: INFO: Pod pod-submit-status-0-6 on node leguer-worker timings total=3.201002457s t=128ms run=0s execute=0s
Jun 14 17:54:06.524: INFO: watch delete seen for pod-submit-status-1-7
Jun 14 17:54:06.524: INFO: Pod pod-submit-status-1-7 on node leguer-worker timings total=3.841366954s t=1.555s run=0s execute=0s
Jun 14 17:54:08.330: INFO: watch delete seen for pod-submit-status-0-7
Jun 14 17:54:08.331: INFO: Pod pod-submit-status-0-7 on node leguer-worker timings total=3.646433963s t=1.307s run=0s execute=0s
Jun 14 17:54:09.283: INFO: watch delete seen for pod-submit-status-2-7
Jun 14 17:54:09.283: INFO: Pod pod-submit-status-2-7 on node leguer-worker timings total=10.000460856s t=1.279s run=1s execute=0s
Jun 14 17:54:12.284: INFO: watch delete seen for pod-submit-status-1-8
Jun 14 17:54:12.284: INFO: Pod pod-submit-status-1-8 on node leguer-worker timings total=5.759114268s t=385ms run=0s execute=0s
Jun 14 17:54:12.727: INFO: watch delete seen for pod-submit-status-0-8
Jun 14 17:54:12.727: INFO: Pod pod-submit-status-0-8 on node leguer-worker timings total=4.396570586s t=742ms run=1s execute=0s
Jun 14 17:54:16.284: INFO: watch delete seen for pod-submit-status-1-9
Jun 14 17:54:16.284: INFO: Pod pod-submit-status-1-9 on node leguer-worker timings total=3.999952549s t=975ms run=1s execute=0s
Jun 14 17:54:16.682: INFO: watch delete seen for pod-submit-status-0-9
Jun 14 17:54:16.683: INFO: Pod pod-submit-status-0-9 on node leguer-worker timings total=3.955265596s t=570ms run=0s execute=0s
Jun 14 17:54:18.283: INFO: watch delete seen for pod-submit-status-1-10
Jun 14 17:54:18.284: INFO: Pod pod-submit-status-1-10 on node leguer-worker timings total=1.999767979s t=317ms run=0s execute=0s
Jun 14 17:54:19.283: INFO: watch delete seen for pod-submit-status-2-8
Jun 14 17:54:19.283: INFO: Pod pod-submit-status-2-8 on node leguer-worker timings total=9.999539418s t=1.923s run=1s execute=0s
Jun 14 17:54:22.283: INFO: watch delete seen for pod-submit-status-2-9
Jun 14 17:54:22.284: INFO: Pod pod-submit-status-2-9 on node leguer-worker timings total=3.000738945s t=659ms run=0s execute=0s
Jun 14 17:54:23.083: INFO: watch delete seen for pod-submit-status-0-10
Jun 14 17:54:23.084: INFO: Pod pod-submit-status-0-10 on node leguer-worker timings total=6.40082842s t=162ms run=0s execute=0s
Jun 14 17:54:26.083: INFO: watch delete seen for pod-submit-status-2-10
Jun 14 17:54:26.083: INFO: Pod pod-submit-status-2-10 on node leguer-worker timings total=3.799625734s t=1.81s run=1s execute=0s
Jun 14 17:54:27.083: INFO: watch delete seen for pod-submit-status-0-11
Jun 14 17:54:27.084: INFO: Pod pod-submit-status-0-11 on node leguer-worker timings total=4.000050891s t=680ms run=0s execute=0s
Jun 14 17:54:28.884: INFO: watch delete seen for pod-submit-status-1-11
Jun 14 17:54:28.884: INFO: Pod pod-submit-status-1-11 on node leguer-worker timings total=10.600157408s t=552ms run=1s execute=0s
Jun 14 17:54:29.484: INFO: watch delete seen for pod-submit-status-2-11
Jun 14 17:54:29.484: INFO: Pod pod-submit-status-2-11 on node leguer-worker timings total=3.400646406s t=887ms run=0s execute=0s
Jun 14 17:54:30.484: INFO: watch delete seen for pod-submit-status-0-12
Jun 14 17:54:30.484: INFO: Pod pod-submit-status-0-12 on node leguer-worker timings total=3.400358134s t=1.352s run=0s execute=0s
Jun 14 17:54:32.284: INFO: watch delete seen for pod-submit-status-1-12
Jun 14 17:54:32.284: INFO: Pod pod-submit-status-1-12 on node leguer-worker timings total=3.399680602s t=1.301s run=0s execute=0s
Jun 14 17:54:33.083: INFO: watch delete seen for pod-submit-status-0-13
Jun 14 17:54:33.083: INFO: Pod pod-submit-status-0-13 on node leguer-worker timings total=2.598489871s t=903ms run=0s execute=0s
Jun 14 17:54:34.283: INFO: watch delete seen for pod-submit-status-2-12
Jun 14 17:54:34.283: INFO: Pod pod-submit-status-2-12 on node leguer-worker timings total=4.79923225s t=1.905s run=1s execute=0s
Jun 14 17:54:36.531: INFO: watch delete seen for pod-submit-status-1-13
Jun 14 17:54:36.531: INFO: Pod pod-submit-status-1-13 on node leguer-worker timings total=4.247004222s t=268ms run=0s execute=0s
Jun 14 17:54:37.884: INFO: watch delete seen for pod-submit-status-0-14
Jun 14 17:54:37.884: INFO: Pod pod-submit-status-0-14 on node leguer-worker timings total=4.801176005s t=108ms run=0s execute=0s
Jun 14 17:54:39.083: INFO: watch delete seen for pod-submit-status-2-13
Jun 14 17:54:39.083: INFO: Pod pod-submit-status-2-13 on node leguer-worker timings total=4.799332969s t=1.101s run=1s execute=0s
Jun 14 17:54:39.683: INFO: watch delete seen for pod-submit-status-1-14
Jun 14 17:54:39.684: INFO: Pod pod-submit-status-1-14 on node leguer-worker timings total=3.152707005s t=800ms run=1s execute=0s
Jun 14 17:54:47.906: INFO: watch delete seen for pod-submit-status-2-14
Jun 14 17:54:47.906: INFO: Pod pod-submit-status-2-14 on node leguer-worker timings total=8.823321622s t=1.046s run=1s execute=0s
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 14 17:54:47.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6643" for this suite.

• [SLOW TEST:85.784 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  [k8s.io] Pod Container Status
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
    should never report success for a pending container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pod Container Status should never report success for a pending container","total":-1,"completed":2,"skipped":531,"failed":0}
Jun 14 17:54:47.920: INFO: Running AfterSuite actions on all nodes
Jun 14 17:54:47.921: INFO: Running AfterSuite actions on node 1
Jun 14 17:54:47.921: INFO: Skipping dumping logs from cluster

Ran 17 of 5668 Specs in 86.786 seconds
SUCCESS! -- 17 Passed | 0 Failed | 0 Pending | 5651 Skipped

Ginkgo ran 1 suite in 1m28.984558623s
Test Suite Passed
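Each per-pod timing record in the log above follows a fixed shape (`Pod <name> on node <node> timings total=... t=... run=... execute=...`, with Go-style durations such as `929ms` or `5.169814715s`). A minimal sketch of extracting those fields for offline analysis; the regex and helper names are my own, not part of the e2e framework:

```python
import re

# Matches the timing records emitted by the "Pod Container Status" spec,
# e.g. "... Pod pod-submit-status-2-1 on node leguer-worker timings
#       total=5.169814715s t=1.967s run=0s execute=0s"
TIMING_RE = re.compile(
    r"Pod (?P<pod>\S+) on node (?P<node>\S+) timings "
    r"total=(?P<total>\S+) t=(?P<t>\S+) run=(?P<run>\S+) execute=(?P<execute>\S+)"
)

def parse_duration(s: str) -> float:
    """Convert a Go duration string ('929ms', '5.169814715s', '0s') to seconds.

    Only the 'ms' and 's' units seen in this log are handled.
    """
    if s.endswith("ms"):
        return float(s[:-2]) / 1000.0
    return float(s.rstrip("s"))

def parse_timing_line(line: str):
    """Return a dict of pod timing fields, or None if the line has no record."""
    m = TIMING_RE.search(line)
    if m is None:
        return None
    record = m.groupdict()
    for key in ("total", "t", "run", "execute"):
        record[key] = parse_duration(record[key])
    return record

line = ("Jun 14 17:53:32.280: INFO: Pod pod-submit-status-2-1 on node "
        "leguer-worker timings total=5.169814715s t=1.967s run=0s execute=0s")
print(parse_timing_line(line))
```

Running the helper over the full log would let you aggregate the `total` values per worker node, which is how you would spot the outliers such as the 11.2s `pod-submit-status-0-4` entry.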