Running Suite: Kubernetes e2e suite =================================== Random Seed: 1621887295 - Will randomize all specs Will run 5667 specs Running in parallel across 10 nodes May 24 20:14:57.162: INFO: >>> kubeConfig: /root/.kube/config May 24 20:14:57.166: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable May 24 20:14:57.232: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 24 20:14:57.475: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 24 20:14:57.476: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. May 24 20:14:57.476: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start May 24 20:14:57.485: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed) May 24 20:14:57.485: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) May 24 20:14:57.485: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds' (0 seconds elapsed) May 24 20:14:57.485: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) May 24 20:14:57.485: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed) May 24 20:14:57.485: INFO: e2e test version: v1.20.6 May 24 20:14:57.486: INFO: kube-apiserver version: v1.20.7 May 24 20:14:57.487: INFO: >>> kubeConfig: /root/.kube/config May 24 20:14:57.492: INFO: Cluster IP family: ipv4 SSSSSSSSSSSS ------------------------------ May 24 20:14:57.494: INFO: >>> kubeConfig: /root/.kube/config May 24 20:14:57.511: INFO: Cluster IP family: ipv4 S ------------------------------ May 24 20:14:57.495: INFO: >>> kubeConfig: /root/.kube/config May 24 20:14:57.512: INFO: Cluster IP family: ipv4 S ------------------------------ May 24 20:14:57.492: INFO: >>> kubeConfig: /root/.kube/config May 24 20:14:57.513: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ May 24 20:14:57.503: INFO: >>> kubeConfig: /root/.kube/config May 24 20:14:57.724: INFO: Cluster IP family: ipv4 May 24 20:14:57.516: INFO: >>> kubeConfig: /root/.kube/config May 24 20:14:57.724: INFO: Cluster IP family: ipv4 May 24 20:14:57.532: INFO: >>> kubeConfig: /root/.kube/config May 24 20:14:57.725: INFO: Cluster IP family: ipv4 May 24 20:14:57.514: INFO: >>> kubeConfig: /root/.kube/config May 24 20:14:57.725: INFO: Cluster IP family: ipv4 May 24 20:14:57.517: INFO: >>> kubeConfig: /root/.kube/config May 24 20:14:57.725: INFO: Cluster IP family: ipv4 May 24 20:14:57.513: INFO: >>> kubeConfig: /root/.kube/config May 24 20:14:57.725: INFO: Cluster IP family: ipv4 
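[Editor's note] The preamble above shows the suite's pre-flight checks: every node must be schedulable and every kube-system daemonset (create-loop-devs, kindnet, kube-multus-ds, kube-proxy, tune-sysctls) fully ready before any spec runs. A minimal client-go sketch of that kind of readiness gate is shown below; the function name is illustrative and this is not the e2e framework's own helper.

```go
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// clusterReady reports whether every node is Ready and every kube-system
// daemonset has all of its desired pods ready, roughly mirroring the
// pre-flight checks logged above.
func clusterReady(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
				return false, nil // node not Ready yet
			}
		}
	}
	dss, err := cs.AppsV1().DaemonSets("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	for _, ds := range dss.Items {
		if ds.Status.NumberReady != ds.Status.DesiredNumberScheduled {
			return false, nil // e.g. kindnet or kube-proxy still rolling out
		}
	}
	return true, nil
}
```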
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:14:57.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor May 24 20:14:58.053: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 24 20:14:58.056: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 May 24 20:14:58.058: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [k8s.io] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:14:58.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-9282" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.115 seconds] [k8s.io] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 should enforce an AppArmor profile [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:267 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:14:58.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor May 24 20:14:58.452: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 24 20:14:58.455: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 May 24 20:14:58.457: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [k8s.io] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:14:58.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-3556" for this suite. 
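[Editor's note] Both AppArmor specs in this run are skipped because the nodes report a debian OS distro, which the test does not support. For reference, AppArmor confinement is selected per container through the beta annotation shown below; this is only a sketch, and the profile name is illustrative and must already be loaded on the node.

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// apparmorPod builds a pod whose single container is confined by a local
// AppArmor profile via the per-container beta annotation.
func apparmorPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "apparmor-demo",
			Annotations: map[string]string{
				// key suffix must match the container name below
				"container.apparmor.security.beta.kubernetes.io/test": "localhost/k8s-apparmor-example-deny-write",
			},
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /tmp/denied"},
			}},
		},
	}
}
```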
S [SKIPPING] in Spec Setup (BeforeEach) [0.336 seconds] [k8s.io] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 can disable an AppArmor profile, using unconfined [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:267 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:14:58.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap May 24 20:14:58.449: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 24 20:14:58.453: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [It] should update ConfigMap successfully /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:140 STEP: Creating ConfigMap configmap-3199/configmap-test-e1ccc952-7452-4e8a-a9d1-59bea98c2551 STEP: Updating configMap configmap-3199/configmap-test-e1ccc952-7452-4e8a-a9d1-59bea98c2551 STEP: Verifying update of ConfigMap configmap-3199/configmap-test-e1ccc952-7452-4e8a-a9d1-59bea98c2551 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:14:58.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3199" for this suite. •SS ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":1,"skipped":264,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:14:58.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crictl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:33 May 24 20:14:59.642: INFO: Only supported for providers [gce gke] (not skeleton) [AfterEach] [k8s.io] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:14:59.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crictl-27" for this suite. 
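[Editor's note] The `[sig-node] ConfigMap should update ConfigMap successfully` spec that passes earlier in this block follows a simple create/update/verify sequence. A minimal client-go sketch of that sequence (illustrative helper, not the test's own code):

```go
package example

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// updateConfigMap creates a ConfigMap, mutates its data, and reads it back,
// mirroring the create/update/verify steps logged for configmap-3199.
func updateConfigMap(ctx context.Context, cs *kubernetes.Clientset, ns string) error {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-demo"},
		Data:       map[string]string{"key": "original"},
	}
	created, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	created.Data["key"] = "updated"
	if _, err := cs.CoreV1().ConfigMaps(ns).Update(ctx, created, metav1.UpdateOptions{}); err != nil {
		return err
	}
	got, err := cs.CoreV1().ConfigMaps(ns).Get(ctx, "configmap-demo", metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Println("observed value:", got.Data["key"])
	return nil
}
```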
S [SKIPPING] in Spec Setup (BeforeEach) [0.908 seconds] [k8s.io] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be able to run crictl on the node [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:40 Only supported for providers [gce gke] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:35 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:15:00.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ssh STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:36 May 24 20:15:01.137: INFO: Only supported for providers [gce gke aws local] (not skeleton) [AfterEach] [k8s.io] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:15:01.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ssh-6508" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.853 seconds] [k8s.io] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should SSH to all nodes and run commands [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45 Only supported for providers [gce gke aws local] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:38 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:14:57.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context May 24 20:14:58.049: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 24 20:14:58.053: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups May 24 20:14:58.060: INFO: Waiting up to 5m0s for pod "security-context-bd52a240-76e8-49a7-9514-fbb692fe203f" in namespace "security-context-7505" to be "Succeeded or Failed" May 24 20:14:58.067: INFO: Pod "security-context-bd52a240-76e8-49a7-9514-fbb692fe203f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.487627ms May 24 20:15:00.130: INFO: Pod "security-context-bd52a240-76e8-49a7-9514-fbb692fe203f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069356351s May 24 20:15:02.223: INFO: Pod "security-context-bd52a240-76e8-49a7-9514-fbb692fe203f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162464972s May 24 20:15:04.721: INFO: Pod "security-context-bd52a240-76e8-49a7-9514-fbb692fe203f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.66073548s May 24 20:15:06.730: INFO: Pod "security-context-bd52a240-76e8-49a7-9514-fbb692fe203f": Phase="Running", Reason="", readiness=true. Elapsed: 8.669737317s May 24 20:15:08.832: INFO: Pod "security-context-bd52a240-76e8-49a7-9514-fbb692fe203f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.771827131s STEP: Saw pod success May 24 20:15:08.832: INFO: Pod "security-context-bd52a240-76e8-49a7-9514-fbb692fe203f" satisfied condition "Succeeded or Failed" May 24 20:15:09.222: INFO: Trying to get logs from node leguer-worker2 pod security-context-bd52a240-76e8-49a7-9514-fbb692fe203f container test-container: STEP: delete the pod May 24 20:15:10.424: INFO: Waiting for pod security-context-bd52a240-76e8-49a7-9514-fbb692fe203f to disappear May 24 20:15:10.628: INFO: Pod security-context-bd52a240-76e8-49a7-9514-fbb692fe203f no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:15:10.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-7505" for this suite. 
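[Editor's note] The Security Context spec above exercises `pod.Spec.SecurityContext.SupplementalGroups`. A sketch of the kind of pod it creates is below; the GIDs, names, and image are illustrative, not the test's actual fixture.

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// supplementalGroupsPod builds a pod whose container should list the extra
// group IDs in its `id -G` output.
func supplementalGroupsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "supplemental-groups-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				SupplementalGroups: []int64{1234, 5678},
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "id -G"},
			}},
		},
	}
}
```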
• [SLOW TEST:12.983 seconds] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":1,"skipped":195,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:14:58.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:164 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod May 24 20:14:58.458: INFO: Waiting up to 5m0s for pod "security-context-7c8d86bf-b49c-4479-8e9d-98698b5afe98" in namespace "security-context-6523" to be "Succeeded or Failed" May 24 20:14:58.463: INFO: Pod "security-context-7c8d86bf-b49c-4479-8e9d-98698b5afe98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.887763ms May 24 20:15:00.636: INFO: Pod "security-context-7c8d86bf-b49c-4479-8e9d-98698b5afe98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177947759s May 24 20:15:02.926: INFO: Pod "security-context-7c8d86bf-b49c-4479-8e9d-98698b5afe98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.468058704s May 24 20:15:04.932: INFO: Pod "security-context-7c8d86bf-b49c-4479-8e9d-98698b5afe98": Phase="Running", Reason="", readiness=true. Elapsed: 6.474691327s May 24 20:15:07.029: INFO: Pod "security-context-7c8d86bf-b49c-4479-8e9d-98698b5afe98": Phase="Running", Reason="", readiness=true. Elapsed: 8.571544579s May 24 20:15:09.222: INFO: Pod "security-context-7c8d86bf-b49c-4479-8e9d-98698b5afe98": Phase="Running", Reason="", readiness=true. Elapsed: 10.764091248s May 24 20:15:11.237: INFO: Pod "security-context-7c8d86bf-b49c-4479-8e9d-98698b5afe98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.779584138s STEP: Saw pod success May 24 20:15:11.237: INFO: Pod "security-context-7c8d86bf-b49c-4479-8e9d-98698b5afe98" satisfied condition "Succeeded or Failed" May 24 20:15:11.242: INFO: Trying to get logs from node leguer-worker2 pod security-context-7c8d86bf-b49c-4479-8e9d-98698b5afe98 container test-container: STEP: delete the pod May 24 20:15:11.440: INFO: Waiting for pod security-context-7c8d86bf-b49c-4479-8e9d-98698b5afe98 to disappear May 24 20:15:11.443: INFO: Pod security-context-7c8d86bf-b49c-4479-8e9d-98698b5afe98 no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:15:11.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-6523" for this suite. 
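[Editor's note] The seccomp specs in this run, as the STEP lines say, still drive the pod-level alpha annotation `seccomp.security.alpha.kubernetes.io/pod` (the `securityContext.seccompProfile` field is its GA replacement). A sketch of a pod pinned to the runtime's default profile via that annotation, with illustrative names:

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// seccompRuntimeDefaultPod applies the runtime's default seccomp profile to
// the whole pod via the deprecated alpha annotation named in the log above.
func seccompRuntimeDefaultPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "seccomp-default-demo",
			Annotations: map[string]string{
				"seccomp.security.alpha.kubernetes.io/pod": "runtime/default",
			},
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Print the Seccomp mode applied to the container's init process.
				Command: []string{"sh", "-c", "grep Seccomp /proc/1/status"},
			}},
		},
	}
}
```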
• [SLOW TEST:13.087 seconds] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:164 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":1,"skipped":328,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:14:58.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446 [It] should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 STEP: creating the pod that should always exit 0 STEP: submitting the pod to kubernetes May 24 20:14:59.654: INFO: Waiting up to 5m0s for pod "pod-always-succeedd4b0f694-2fdb-449f-a18b-120c2d7a636d" in namespace "pods-7111" to be "Succeeded or Failed" May 24 20:14:59.656: INFO: Pod "pod-always-succeedd4b0f694-2fdb-449f-a18b-120c2d7a636d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.340245ms May 24 20:15:01.822: INFO: Pod "pod-always-succeedd4b0f694-2fdb-449f-a18b-120c2d7a636d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168616303s May 24 20:15:03.828: INFO: Pod "pod-always-succeedd4b0f694-2fdb-449f-a18b-120c2d7a636d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.174047434s May 24 20:15:05.837: INFO: Pod "pod-always-succeedd4b0f694-2fdb-449f-a18b-120c2d7a636d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.183617792s May 24 20:15:07.842: INFO: Pod "pod-always-succeedd4b0f694-2fdb-449f-a18b-120c2d7a636d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.188188286s May 24 20:15:10.135: INFO: Pod "pod-always-succeedd4b0f694-2fdb-449f-a18b-120c2d7a636d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.481157822s STEP: Saw pod success May 24 20:15:10.135: INFO: Pod "pod-always-succeedd4b0f694-2fdb-449f-a18b-120c2d7a636d" satisfied condition "Succeeded or Failed" STEP: Getting events about the pod STEP: Checking events about the pod STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:15:12.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7111" for this suite. 
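[Editor's note] Most specs in this log follow the same pattern visible in the repeated `Phase="Pending" ... Elapsed: ...` lines: create a short-lived pod, then poll it until it is "Succeeded or Failed". A minimal sketch of that polling loop with client-go (an illustrative helper, not the framework's own wait function):

```go
package example

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSucceededOrFailed polls the pod every 2s for up to 5m until its
// phase is terminal, mirroring the "Succeeded or Failed" condition in the log.
func waitForPodSucceededOrFailed(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (corev1.PodPhase, error) {
	var phase corev1.PodPhase
	err := wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		phase = pod.Status.Phase
		fmt.Printf("pod %q: phase=%s\n", name, phase)
		return phase == corev1.PodSucceeded || phase == corev1.PodFailed, nil
	})
	return phase, err
}
```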
• [SLOW TEST:14.395 seconds] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 [k8s.io] Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":1,"skipped":362,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:14:58.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context May 24 20:14:59.642: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 24 20:14:59.650: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:103 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser May 24 20:14:59.658: INFO: Waiting up to 5m0s for pod "security-context-64fdc3f9-2fdc-4598-85bd-cdcb024b36f6" in namespace "security-context-5461" to be "Succeeded or Failed" May 24 20:14:59.660: INFO: Pod "security-context-64fdc3f9-2fdc-4598-85bd-cdcb024b36f6": Phase="Pending", Reason="", readiness=false. Elapsed: 1.590599ms May 24 20:15:01.822: INFO: Pod "security-context-64fdc3f9-2fdc-4598-85bd-cdcb024b36f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164050459s May 24 20:15:03.828: INFO: Pod "security-context-64fdc3f9-2fdc-4598-85bd-cdcb024b36f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.169393475s May 24 20:15:05.837: INFO: Pod "security-context-64fdc3f9-2fdc-4598-85bd-cdcb024b36f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.17893786s May 24 20:15:07.842: INFO: Pod "security-context-64fdc3f9-2fdc-4598-85bd-cdcb024b36f6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.183910559s May 24 20:15:10.135: INFO: Pod "security-context-64fdc3f9-2fdc-4598-85bd-cdcb024b36f6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.476595263s May 24 20:15:12.141: INFO: Pod "security-context-64fdc3f9-2fdc-4598-85bd-cdcb024b36f6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.482957564s STEP: Saw pod success May 24 20:15:12.141: INFO: Pod "security-context-64fdc3f9-2fdc-4598-85bd-cdcb024b36f6" satisfied condition "Succeeded or Failed" May 24 20:15:12.146: INFO: Trying to get logs from node leguer-worker2 pod security-context-64fdc3f9-2fdc-4598-85bd-cdcb024b36f6 container test-container: STEP: delete the pod May 24 20:15:13.131: INFO: Waiting for pod security-context-64fdc3f9-2fdc-4598-85bd-cdcb024b36f6 to disappear May 24 20:15:13.136: INFO: Pod security-context-64fdc3f9-2fdc-4598-85bd-cdcb024b36f6 no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:15:13.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-5461" for this suite. • [SLOW TEST:14.595 seconds] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support container.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:103 ------------------------------ SS ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":1,"skipped":566,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:15:01.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:109 STEP: Creating a pod to test downward api env vars May 24 20:15:03.331: INFO: Waiting up to 5m0s for pod "downward-api-84a6cbd1-604a-437e-9f65-d898f1d29109" in namespace "downward-api-861" to be "Succeeded or Failed" May 24 20:15:03.438: INFO: Pod "downward-api-84a6cbd1-604a-437e-9f65-d898f1d29109": Phase="Pending", Reason="", readiness=false. 
Elapsed: 106.882263ms May 24 20:15:05.542: INFO: Pod "downward-api-84a6cbd1-604a-437e-9f65-d898f1d29109": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210099173s May 24 20:15:07.544: INFO: Pod "downward-api-84a6cbd1-604a-437e-9f65-d898f1d29109": Phase="Pending", Reason="", readiness=false. Elapsed: 4.212878788s May 24 20:15:09.830: INFO: Pod "downward-api-84a6cbd1-604a-437e-9f65-d898f1d29109": Phase="Pending", Reason="", readiness=false. Elapsed: 6.498885003s May 24 20:15:11.927: INFO: Pod "downward-api-84a6cbd1-604a-437e-9f65-d898f1d29109": Phase="Pending", Reason="", readiness=false. Elapsed: 8.595242427s May 24 20:15:14.123: INFO: Pod "downward-api-84a6cbd1-604a-437e-9f65-d898f1d29109": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.791163608s STEP: Saw pod success May 24 20:15:14.123: INFO: Pod "downward-api-84a6cbd1-604a-437e-9f65-d898f1d29109" satisfied condition "Succeeded or Failed" May 24 20:15:14.126: INFO: Trying to get logs from node leguer-worker pod downward-api-84a6cbd1-604a-437e-9f65-d898f1d29109 container dapi-container: STEP: delete the pod May 24 20:15:14.722: INFO: Waiting for pod downward-api-84a6cbd1-604a-437e-9f65-d898f1d29109 to disappear May 24 20:15:14.931: INFO: Pod downward-api-84a6cbd1-604a-437e-9f65-d898f1d29109 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:15:14.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-861" for this suite. • [SLOW TEST:13.760 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:109 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":2,"skipped":930,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:14:57.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context May 24 20:14:58.044: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 24 20:14:58.047: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:157 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod May 24 20:14:58.055: INFO: Waiting up to 5m0s for pod "security-context-4a9b4f8e-a5c7-4b06-8e32-2cbc6844224f" in namespace "security-context-9557" to be "Succeeded or Failed" May 24 20:14:58.057: INFO: Pod "security-context-4a9b4f8e-a5c7-4b06-8e32-2cbc6844224f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1.996205ms May 24 20:15:00.130: INFO: Pod "security-context-4a9b4f8e-a5c7-4b06-8e32-2cbc6844224f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075899943s May 24 20:15:02.223: INFO: Pod "security-context-4a9b4f8e-a5c7-4b06-8e32-2cbc6844224f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16818604s May 24 20:15:04.721: INFO: Pod "security-context-4a9b4f8e-a5c7-4b06-8e32-2cbc6844224f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.666788774s May 24 20:15:06.730: INFO: Pod "security-context-4a9b4f8e-a5c7-4b06-8e32-2cbc6844224f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.675705502s May 24 20:15:08.832: INFO: Pod "security-context-4a9b4f8e-a5c7-4b06-8e32-2cbc6844224f": Phase="Running", Reason="", readiness=true. Elapsed: 10.777834694s May 24 20:15:10.930: INFO: Pod "security-context-4a9b4f8e-a5c7-4b06-8e32-2cbc6844224f": Phase="Running", Reason="", readiness=true. Elapsed: 12.875353405s May 24 20:15:13.131: INFO: Pod "security-context-4a9b4f8e-a5c7-4b06-8e32-2cbc6844224f": Phase="Running", Reason="", readiness=true. Elapsed: 15.076341772s May 24 20:15:15.138: INFO: Pod "security-context-4a9b4f8e-a5c7-4b06-8e32-2cbc6844224f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.08312657s STEP: Saw pod success May 24 20:15:15.138: INFO: Pod "security-context-4a9b4f8e-a5c7-4b06-8e32-2cbc6844224f" satisfied condition "Succeeded or Failed" May 24 20:15:15.331: INFO: Trying to get logs from node leguer-worker pod security-context-4a9b4f8e-a5c7-4b06-8e32-2cbc6844224f container test-container: STEP: delete the pod May 24 20:15:15.460: INFO: Waiting for pod security-context-4a9b4f8e-a5c7-4b06-8e32-2cbc6844224f to disappear May 24 20:15:15.464: INFO: Pod security-context-4a9b4f8e-a5c7-4b06-8e32-2cbc6844224f no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:15:15.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-9557" for this suite. 
• [SLOW TEST:17.803 seconds] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:157 ------------------------------ S ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":1,"skipped":117,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:15:15.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-problem-detector STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:51 May 24 20:15:15.515: INFO: No SSH Key for provider skeleton: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:15:15.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-problem-detector-7457" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.040 seconds] [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should run without error [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:59 No SSH Key for provider skeleton: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:52 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:15:11.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:171 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod May 24 20:15:12.146: INFO: Waiting up to 5m0s for pod "security-context-688208bf-55cd-4959-9c40-6d32b8f6cca5" in namespace "security-context-1351" to be "Succeeded or Failed" May 24 20:15:12.152: INFO: Pod "security-context-688208bf-55cd-4959-9c40-6d32b8f6cca5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.867824ms May 24 20:15:14.239: INFO: Pod "security-context-688208bf-55cd-4959-9c40-6d32b8f6cca5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092384558s May 24 20:15:16.431: INFO: Pod "security-context-688208bf-55cd-4959-9c40-6d32b8f6cca5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.284422063s STEP: Saw pod success May 24 20:15:16.431: INFO: Pod "security-context-688208bf-55cd-4959-9c40-6d32b8f6cca5" satisfied condition "Succeeded or Failed" May 24 20:15:16.435: INFO: Trying to get logs from node leguer-worker2 pod security-context-688208bf-55cd-4959-9c40-6d32b8f6cca5 container test-container: STEP: delete the pod May 24 20:15:17.422: INFO: Waiting for pod security-context-688208bf-55cd-4959-9c40-6d32b8f6cca5 to disappear May 24 20:15:17.535: INFO: Pod security-context-688208bf-55cd-4959-9c40-6d32b8f6cca5 no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:15:17.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-1351" for this suite. 
• [SLOW TEST:5.800 seconds] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:171 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":2,"skipped":592,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ May 24 20:15:18.465: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:14:57.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods May 24 20:14:58.055: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 24 20:14:58.058: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:53 [It] should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 24 20:15:14.653: INFO: start=2021-05-24 20:15:09.42578912 +0000 UTC m=+14.323215817, now=2021-05-24 20:15:14.65392788 +0000 UTC m=+19.551354516, kubelet pod: {"metadata":{"name":"pod-submit-remove-f269aa10-79d5-41d5-8a1f-cf62a90d72cc","namespace":"pods-471","uid":"c5dcdf9c-0efc-4a70-b01a-386929548c87","resourceVersion":"887667","creationTimestamp":"2021-05-24T20:14:58Z","deletionTimestamp":"2021-05-24T20:15:39Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"60471578"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.1.189\"\n ],\n \"mac\": \"4a:e6:35:b7:04:d2\",\n \"default\": true,\n \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.1.189\"\n ],\n \"mac\": \"4a:e6:35:b7:04:d2\",\n \"default\": true,\n \"dns\": 
{}\n}]","kubernetes.io/config.seen":"2021-05-24T20:14:58.075650813Z","kubernetes.io/config.source":"api"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-05-24T20:14:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"default-token-gmwrv","secret":{"secretName":"default-token-gmwrv","defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.21","args":["pause"],"resources":{},"volumeMounts":[{"name":"default-token-gmwrv","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"leguer-worker","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-05-24T20:14:58Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-05-24T20:15:04Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-05-24T20:15:04Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-05-24T20:14:58Z"}],"hostIP":"172.18.0.7","podIP":"10.244.1.189","podIPs":[{"ip":"10.244.1.189"}],"startTime":"2021-05-24T20:14:58Z","containerStatuses":[{"name":"agnhost-container","state":{"running":{"startedAt":"2021-05-24T20:15:04Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.21","imageID":"k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a","containerID":"containerd://e9c36bfaecbba9fa273b44c3d09de41385a6218d756a511b99068e79f8df98dd","started":true}],"qosClass":"BestEffort"}} May 24 20:15:19.440: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:15:19.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-471" for this suite. 
• [SLOW TEST:21.517 seconds] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 [k8s.io] Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed","total":-1,"completed":1,"skipped":80,"failed":0} May 24 20:15:19.452: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:15:15.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser May 24 20:15:15.673: INFO: Waiting up to 5m0s for pod "security-context-2b0b2b70-9f8b-49a1-a119-1fe6bc8d634d" in namespace "security-context-1506" to be "Succeeded or Failed" May 24 20:15:15.675: INFO: Pod "security-context-2b0b2b70-9f8b-49a1-a119-1fe6bc8d634d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.342504ms May 24 20:15:17.678: INFO: Pod "security-context-2b0b2b70-9f8b-49a1-a119-1fe6bc8d634d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005466379s May 24 20:15:19.732: INFO: Pod "security-context-2b0b2b70-9f8b-49a1-a119-1fe6bc8d634d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058636434s STEP: Saw pod success May 24 20:15:19.732: INFO: Pod "security-context-2b0b2b70-9f8b-49a1-a119-1fe6bc8d634d" satisfied condition "Succeeded or Failed" May 24 20:15:19.740: INFO: Trying to get logs from node leguer-worker2 pod security-context-2b0b2b70-9f8b-49a1-a119-1fe6bc8d634d container test-container: STEP: delete the pod May 24 20:15:19.756: INFO: Waiting for pod security-context-2b0b2b70-9f8b-49a1-a119-1fe6bc8d634d to disappear May 24 20:15:19.759: INFO: Pod security-context-2b0b2b70-9f8b-49a1-a119-1fe6bc8d634d no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:15:19.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-1506" for this suite. 
• ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":2,"skipped":209,"failed":0} May 24 20:15:19.768: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:15:13.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:149 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod May 24 20:15:14.323: INFO: Waiting up to 5m0s for pod "security-context-9776c7b0-7530-4568-a1da-9dbae2d51fe8" in namespace "security-context-9354" to be "Succeeded or Failed" May 24 20:15:14.628: INFO: Pod "security-context-9776c7b0-7530-4568-a1da-9dbae2d51fe8": Phase="Pending", Reason="", readiness=false. Elapsed: 305.436286ms May 24 20:15:16.824: INFO: Pod "security-context-9776c7b0-7530-4568-a1da-9dbae2d51fe8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.501050781s May 24 20:15:18.830: INFO: Pod "security-context-9776c7b0-7530-4568-a1da-9dbae2d51fe8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.507222506s May 24 20:15:20.835: INFO: Pod "security-context-9776c7b0-7530-4568-a1da-9dbae2d51fe8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.512068744s STEP: Saw pod success May 24 20:15:20.835: INFO: Pod "security-context-9776c7b0-7530-4568-a1da-9dbae2d51fe8" satisfied condition "Succeeded or Failed" May 24 20:15:20.840: INFO: Trying to get logs from node leguer-worker pod security-context-9776c7b0-7530-4568-a1da-9dbae2d51fe8 container test-container: STEP: delete the pod May 24 20:15:21.335: INFO: Waiting for pod security-context-9776c7b0-7530-4568-a1da-9dbae2d51fe8 to disappear May 24 20:15:21.342: INFO: Pod security-context-9776c7b0-7530-4568-a1da-9dbae2d51fe8 no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:15:21.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-9354" for this suite. 
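[Editor's note] The spec above opts a single container out of seccomp filtering rather than the whole pod, which historically used the per-container alpha annotation keyed by container name. A sketch under that assumption, with illustrative names:

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// seccompUnconfinedContainerPod exempts one container from seccomp via the
// per-container alpha annotation; the key suffix must match the container name.
func seccompUnconfinedContainerPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "seccomp-unconfined-demo",
			Annotations: map[string]string{
				"container.seccomp.security.alpha.kubernetes.io/test-container": "unconfined",
			},
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "grep Seccomp /proc/1/status"},
			}},
		},
	}
}
```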
• [SLOW TEST:8.041 seconds] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:149 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":2,"skipped":685,"failed":0} May 24 20:15:21.358: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:15:10.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:118 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser May 24 20:15:10.820: INFO: Waiting up to 5m0s for pod "security-context-8636524e-060c-4bc4-ba67-134fee78a039" in namespace "security-context-8146" to be "Succeeded or Failed" May 24 20:15:10.823: INFO: Pod "security-context-8636524e-060c-4bc4-ba67-134fee78a039": Phase="Pending", Reason="", readiness=false. Elapsed: 2.800788ms May 24 20:15:13.131: INFO: Pod "security-context-8636524e-060c-4bc4-ba67-134fee78a039": Phase="Pending", Reason="", readiness=false. Elapsed: 2.311337395s May 24 20:15:15.329: INFO: Pod "security-context-8636524e-060c-4bc4-ba67-134fee78a039": Phase="Pending", Reason="", readiness=false. Elapsed: 4.508424058s May 24 20:15:17.430: INFO: Pod "security-context-8636524e-060c-4bc4-ba67-134fee78a039": Phase="Pending", Reason="", readiness=false. Elapsed: 6.610054487s May 24 20:15:19.433: INFO: Pod "security-context-8636524e-060c-4bc4-ba67-134fee78a039": Phase="Pending", Reason="", readiness=false. Elapsed: 8.61276494s May 24 20:15:21.530: INFO: Pod "security-context-8636524e-060c-4bc4-ba67-134fee78a039": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.709845152s STEP: Saw pod success May 24 20:15:21.530: INFO: Pod "security-context-8636524e-060c-4bc4-ba67-134fee78a039" satisfied condition "Succeeded or Failed" May 24 20:15:21.541: INFO: Trying to get logs from node leguer-worker pod security-context-8636524e-060c-4bc4-ba67-134fee78a039 container test-container: STEP: delete the pod May 24 20:15:21.740: INFO: Waiting for pod security-context-8636524e-060c-4bc4-ba67-134fee78a039 to disappear May 24 20:15:21.743: INFO: Pod security-context-8636524e-060c-4bc4-ba67-134fee78a039 no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:15:21.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-8146" for this suite. 
• [SLOW TEST:10.963 seconds] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:118 ------------------------------ [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:15:14.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:89 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser May 24 20:15:15.457: INFO: Waiting up to 5m0s for pod "security-context-d5d028dc-ed7b-43d5-9bcd-aa9982fdf297" in namespace "security-context-7972" to be "Succeeded or Failed" May 24 20:15:15.461: INFO: Pod "security-context-d5d028dc-ed7b-43d5-9bcd-aa9982fdf297": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006758ms May 24 20:15:17.535: INFO: Pod "security-context-d5d028dc-ed7b-43d5-9bcd-aa9982fdf297": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078291678s May 24 20:15:19.538: INFO: Pod "security-context-d5d028dc-ed7b-43d5-9bcd-aa9982fdf297": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081296529s May 24 20:15:21.541: INFO: Pod "security-context-d5d028dc-ed7b-43d5-9bcd-aa9982fdf297": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.084295109s STEP: Saw pod success May 24 20:15:21.542: INFO: Pod "security-context-d5d028dc-ed7b-43d5-9bcd-aa9982fdf297" satisfied condition "Succeeded or Failed" May 24 20:15:21.548: INFO: Trying to get logs from node leguer-worker pod security-context-d5d028dc-ed7b-43d5-9bcd-aa9982fdf297 container test-container: STEP: delete the pod May 24 20:15:21.733: INFO: Waiting for pod security-context-d5d028dc-ed7b-43d5-9bcd-aa9982fdf297 to disappear May 24 20:15:21.743: INFO: Pod security-context-d5d028dc-ed7b-43d5-9bcd-aa9982fdf297 no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:15:21.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-7972" for this suite. 
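[Editor's note] The last few Security Context specs set `RunAsUser`/`RunAsGroup` at the pod level and at the container level; when both are set, the container-level values take precedence for that container. A sketch combining the two (IDs and names are illustrative):

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// runAsPod sets RunAsUser/RunAsGroup on the pod and overrides them for its
// single container; `id -u; id -g` inside the container prints the
// container-level IDs, since those win over the pod-level ones.
func runAsPod() *corev1.Pod {
	podUID, podGID := int64(1001), int64(2002)
	ctrUID, ctrGID := int64(1002), int64(2003)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "runas-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser:  &podUID,
				RunAsGroup: &podGID,
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "id -u; id -g"},
				SecurityContext: &corev1.SecurityContext{
					RunAsUser:  &ctrUID,
					RunAsGroup: &ctrGID,
				},
			}},
		},
	}
}
```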
• [SLOW TEST:6.815 seconds] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:89 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly]","total":-1,"completed":2,"skipped":1285,"failed":0} May 24 20:15:21.765: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:14:58.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet May 24 20:14:58.472: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 24 20:14:58.476: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:274 [BeforeEach] [k8s.io] [sig-node] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:295 [It] kubelet should be able to delete 10 pods per node in 1m0s. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 STEP: Creating a RC of 20 pods and wait until all pods of this RC are running STEP: creating replication controller cleanup20-cdbe47b5-4ad4-4fce-a3db-71e4e0c7b965 in namespace kubelet-4125 I0524 20:14:58.504850 30 runners.go:190] Created replication controller with name: cleanup20-cdbe47b5-4ad4-4fce-a3db-71e4e0c7b965, namespace: kubelet-4125, replica count: 20 May 24 20:14:58.589: INFO: Missing info/stats for container "runtime" on node "leguer-worker2" May 24 20:14:58.652: INFO: Missing info/stats for container "runtime" on node "leguer-control-plane" May 24 20:14:58.929: INFO: Missing info/stats for container "runtime" on node "leguer-worker" May 24 20:15:03.871: INFO: Missing info/stats for container "runtime" on node "leguer-control-plane" May 24 20:15:04.145: INFO: Missing info/stats for container "runtime" on node "leguer-worker2" May 24 20:15:04.416: INFO: Missing info/stats for container "runtime" on node "leguer-worker" I0524 20:15:08.555216 30 runners.go:190] cleanup20-cdbe47b5-4ad4-4fce-a3db-71e4e0c7b965 Pods: 20 out of 20 created, 9 running, 11 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 24 20:15:09.360: INFO: Missing info/stats for container "runtime" on node "leguer-control-plane" May 24 20:15:09.594: INFO: Missing info/stats for container "runtime" on node "leguer-worker2" May 24 20:15:09.988: INFO: Missing info/stats for container "runtime" on node "leguer-worker" May 24 20:15:14.757: INFO: Missing info/stats for container "runtime" on node "leguer-worker2" May 24 20:15:14.758: INFO: Missing info/stats for container "runtime" on node "leguer-control-plane" May 24 20:15:15.270: INFO: Missing info/stats for container "runtime" on node "leguer-worker" I0524 20:15:18.555528 30 runners.go:190] 
cleanup20-cdbe47b5-4ad4-4fce-a3db-71e4e0c7b965 Pods: 20 out of 20 created, 20 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 24 20:15:19.555: INFO: Checking pods on node leguer-worker2 via /runningpods endpoint May 24 20:15:19.555: INFO: Checking pods on node leguer-worker via /runningpods endpoint May 24 20:15:19.582: INFO: [Resource usage on node "leguer-worker" is not ready yet, Resource usage on node "leguer-worker2" is not ready yet, Resource usage on node "leguer-control-plane" is not ready yet] May 24 20:15:19.582: INFO: STEP: Deleting the RC STEP: deleting ReplicationController cleanup20-cdbe47b5-4ad4-4fce-a3db-71e4e0c7b965 in namespace kubelet-4125, will wait for the garbage collector to delete the pods May 24 20:15:19.642: INFO: Deleting ReplicationController cleanup20-cdbe47b5-4ad4-4fce-a3db-71e4e0c7b965 took: 7.030755ms May 24 20:15:19.890: INFO: Missing info/stats for container "runtime" on node "leguer-control-plane" May 24 20:15:20.197: INFO: Missing info/stats for container "runtime" on node "leguer-worker2" May 24 20:15:20.737: INFO: Missing info/stats for container "runtime" on node "leguer-worker" May 24 20:15:20.842: INFO: Terminating ReplicationController cleanup20-cdbe47b5-4ad4-4fce-a3db-71e4e0c7b965 pods took: 1.200365783s May 24 20:15:25.075: INFO: Missing info/stats for container "runtime" on node "leguer-control-plane" May 24 20:15:25.288: INFO: Missing info/stats for container "runtime" on node "leguer-worker2" May 24 20:15:25.850: INFO: Missing info/stats for container "runtime" on node "leguer-worker" May 24 20:15:30.226: INFO: Missing info/stats for container "runtime" on node "leguer-control-plane" May 24 20:15:31.329: INFO: Missing info/stats for container "runtime" on node "leguer-worker" May 24 20:15:33.943: INFO: Checking pods on node leguer-worker via /runningpods endpoint May 24 20:15:33.943: INFO: Checking pods on node leguer-worker2 via /runningpods endpoint May 24 20:15:34.151: INFO: Deleting 20 pods on 2 nodes completed in 1.208356251s after the RC was deleted May 24 20:15:34.151: INFO: CPU usage of containers on node "leguer-control-plane" :
container 5th% 20th% 50th% 70th% 90th% 95th% 99th%
"/" 0.000 0.000 2.820 3.385 3.385 3.385 3.385
"runtime" 0.000 0.000 0.000 0.000 0.000 0.000 0.000
"kubelet" 0.000 0.000 0.000 0.000 0.000 0.000 0.000
CPU usage of containers on node "leguer-worker" :
container 5th% 20th% 50th% 70th% 90th% 95th% 99th%
"/" 0.000 0.000 1.700 2.118 2.118 2.118 2.118
"runtime" 0.000 0.000 0.000 0.000 0.000 0.000 0.000
"kubelet" 0.000 0.000 0.000 0.000 0.000 0.000 0.000
CPU usage of containers on node "leguer-worker2" :
container 5th% 20th% 50th% 70th% 90th% 95th% 99th%
"/" 0.000 0.000 5.018 5.097 5.097 5.097 5.097
"runtime" 0.000 0.000 0.000 0.000 0.000 0.000 0.000
"kubelet" 0.000 0.000 0.000 0.000 0.000 0.000 0.000
[AfterEach] [k8s.io] [sig-node] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:326 STEP: removing the label kubelet_cleanup off the node leguer-worker May 24 20:15:34.458: INFO: Missing info/stats for container "runtime" on node "leguer-worker2" STEP: verifying the node doesn't have the label kubelet_cleanup STEP: removing the label kubelet_cleanup off the node leguer-worker2 STEP: verifying the node doesn't have the label kubelet_cleanup [AfterEach] [k8s.io] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24
20:15:34.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-4125" for this suite. • [SLOW TEST:36.113 seconds] [k8s.io] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 [k8s.io] [sig-node] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 kubelet should be able to delete 10 pods per node in 1m0s. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] kubelet [k8s.io] [sig-node] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":1,"skipped":484,"failed":0} May 24 20:15:34.560: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:15:15.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: waiting for pod running STEP: deleting the pod gracefully STEP: verifying the pod is running while in the graceful period termination May 24 20:15:38.956: INFO: pod is running [AfterEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:15:38.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-994" for this suite. 
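The PreStop spec that just tore down "prestop-994" hinges on the pod lifecycle API: a preStop exec handler must be allowed to finish, within terminationGracePeriodSeconds, before the kubelet kills the container, which is why the log can still report "pod is running" after the graceful delete was issued. Below is a minimal sketch of a pod with such a hook, assuming the current corev1 field name LifecycleHandler (the v1.20 tree this log comes from still calls the type Handler) and an illustrative sleep in place of the real hook's work.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	grace := int64(30) // seconds the kubelet allows for graceful termination

	// Hypothetical pod: the preStop exec runs before the container is signalled,
	// and deletion only completes once the hook exits or the grace period ends.
	_ = &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "prestop-demo"},
		Spec: corev1.PodSpec{
			TerminationGracePeriodSeconds: &grace,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{
							// Illustrative stand-in for the real hook.
							Command: []string{"sh", "-c", "sleep 10"},
						},
					},
				},
			}},
		},
	}
}
```

On delete, the kubelet runs the preStop command first and only then signals the main process, so a hook that blocks keeps the pod in Terminating for up to the grace period.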
• [SLOW TEST:23.605 seconds] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":3,"skipped":1262,"failed":0} May 24 20:15:39.425: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:14:57.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename mount-propagation May 24 20:14:58.051: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 24 20:14:58.054: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [It] should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 May 24 20:15:33.636: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-8316 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:33.636: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:33.734: INFO: Exec stderr: "" May 24 20:15:33.738: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-8316 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:33.738: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:33.855: INFO: Exec stderr: "" May 24 20:15:33.934: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-8316 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:33.934: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:34.536: INFO: Exec stderr: "" May 24 20:15:34.539: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-8316 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:34.539: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:34.630: INFO: Exec stderr: "" May 24 20:15:34.633: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-8316 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:34.633: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:34.765: INFO: Exec stderr: "" May 24 20:15:34.768: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-8316 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:34.768: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:34.907: INFO: Exec stderr: "" May 24 20:15:34.911: INFO: ExecWithOptions {Command:[/bin/sh -c test -d 
/mnt/test/private] Namespace:mount-propagation-8316 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:34.911: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:35.046: INFO: Exec stderr: "" May 24 20:15:35.048: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-8316 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:35.048: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:35.177: INFO: Exec stderr: "" May 24 20:15:35.231: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-8316 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:35.231: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:35.649: INFO: Exec stderr: "" May 24 20:15:35.657: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-8316 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:35.657: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:35.790: INFO: Exec stderr: "" May 24 20:15:35.793: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-8316 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:35.793: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:35.914: INFO: Exec stderr: "" May 24 20:15:35.916: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-8316 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:35.916: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:36.005: INFO: Exec stderr: "" May 24 20:15:36.008: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-8316 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:36.008: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:36.145: INFO: Exec stderr: "" May 24 20:15:36.148: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-8316 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:36.148: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:36.261: INFO: Exec stderr: "" May 24 20:15:36.331: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-8316 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:36.331: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:36.639: INFO: Exec stderr: "" May 24 20:15:36.736: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-8316 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:36.737: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:36.867: INFO: Exec stderr: "" May 24 20:15:36.929: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-master /mnt/test/master; echo master > /mnt/test/master/file] 
Namespace:mount-propagation-8316 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:36.929: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:37.074: INFO: Exec stderr: "" May 24 20:15:37.132: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-slave /mnt/test/slave; echo slave > /mnt/test/slave/file] Namespace:mount-propagation-8316 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:37.133: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:37.452: INFO: Exec stderr: "" May 24 20:15:37.527: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-private /mnt/test/private; echo private > /mnt/test/private/file] Namespace:mount-propagation-8316 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:37.528: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:37.629: INFO: Exec stderr: "" May 24 20:15:37.632: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-default /mnt/test/default; echo default > /mnt/test/default/file] Namespace:mount-propagation-8316 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:37.632: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:37.748: INFO: Exec stderr: "" May 24 20:15:39.763: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-8316"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-8316"/host; echo host > "/var/lib/kubelet/mount-propagation-8316"/host/file] Namespace:mount-propagation-8316 PodName:hostexec-leguer-worker-jqbbv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:15:39.763: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:39.926: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-8316 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:39.926: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:40.157: INFO: pod master mount master: stdout: "master", stderr: "" error: May 24 20:15:40.159: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-8316 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:40.159: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:40.292: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 May 24 20:15:40.296: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-8316 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:40.296: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:40.412: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 May 24 20:15:40.429: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] 
Namespace:mount-propagation-8316 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:40.429: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:40.643: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 May 24 20:15:40.646: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-8316 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:40.646: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:40.783: INFO: pod master mount host: stdout: "host", stderr: "" error: May 24 20:15:40.785: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-8316 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:40.786: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:40.919: INFO: pod slave mount master: stdout: "master", stderr: "" error: May 24 20:15:40.923: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-8316 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:40.923: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:41.073: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: May 24 20:15:41.129: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-8316 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:41.129: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:41.254: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 May 24 20:15:41.258: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-8316 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:41.258: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:41.402: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 May 24 20:15:41.405: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-8316 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:41.405: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:41.529: INFO: pod slave mount host: stdout: "host", stderr: "" error: May 24 20:15:41.534: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-8316 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:41.534: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:41.671: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 May 24 20:15:41.674: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-8316 PodName:private 
ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:41.674: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:41.814: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 May 24 20:15:41.817: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-8316 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:41.817: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:42.076: INFO: pod private mount private: stdout: "private", stderr: "" error: May 24 20:15:42.079: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-8316 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:42.079: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:42.219: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 May 24 20:15:42.235: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-8316 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:42.235: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:42.358: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 May 24 20:15:42.429: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-8316 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:42.430: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:42.565: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 May 24 20:15:42.568: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-8316 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:42.568: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:42.709: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 May 24 20:15:42.712: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-8316 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:42.712: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:42.844: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 May 24 20:15:42.850: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-8316 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:42.850: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:42.950: INFO: pod default mount default: stdout: 
"default", stderr: "" error: May 24 20:15:43.029: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-8316 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:43.029: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:43.229: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 May 24 20:15:43.229: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-8316"/master/file` = master] Namespace:mount-propagation-8316 PodName:hostexec-leguer-worker-jqbbv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:15:43.229: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:43.357: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! -e "/var/lib/kubelet/mount-propagation-8316"/slave/file] Namespace:mount-propagation-8316 PodName:hostexec-leguer-worker-jqbbv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:15:43.357: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:43.662: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-8316"/host] Namespace:mount-propagation-8316 PodName:hostexec-leguer-worker-jqbbv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:15:43.663: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:43.805: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/default] Namespace:mount-propagation-8316 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:43.806: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:43.931: INFO: Exec stderr: "" May 24 20:15:44.133: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/private] Namespace:mount-propagation-8316 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:44.133: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:44.346: INFO: Exec stderr: "" May 24 20:15:44.349: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/slave] Namespace:mount-propagation-8316 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:44.349: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:44.485: INFO: Exec stderr: "" May 24 20:15:44.487: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/master] Namespace:mount-propagation-8316 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 20:15:44.487: INFO: >>> kubeConfig: /root/.kube/config May 24 20:15:44.622: INFO: Exec stderr: "" May 24 20:15:44.622: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -rf "/var/lib/kubelet/mount-propagation-8316"] Namespace:mount-propagation-8316 PodName:hostexec-leguer-worker-jqbbv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:15:44.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod 
hostexec-leguer-worker-jqbbv in namespace mount-propagation-8316 [AfterEach] [k8s.io] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:15:44.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mount-propagation-8316" for this suite. • [SLOW TEST:46.980 seconds] [k8s.io] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":1,"skipped":290,"failed":0} May 24 20:15:44.978: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:14:57.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods May 24 20:14:58.044: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 24 20:14:58.047: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:202 [It] should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 STEP: creating pods that should always exit 1 and terminating the pod after a random delay May 24 20:15:02.223: INFO: watch delete seen for pod-submit-status-0-0 May 24 20:15:02.619: INFO: Pod pod-submit-status-0-0 on node leguer-worker timings total=4.569644993s t=600ms run=0s execute=0s May 24 20:15:06.144: INFO: watch delete seen for pod-submit-status-1-0 May 24 20:15:06.144: INFO: Pod pod-submit-status-1-0 on node leguer-worker timings total=8.09484111s t=1.413s run=0s execute=0s May 24 20:15:06.631: INFO: watch delete seen for pod-submit-status-0-1 May 24 20:15:06.631: INFO: Pod pod-submit-status-0-1 on node leguer-worker2 timings total=4.011853439s t=1.343s run=0s execute=0s May 24 20:15:14.331: INFO: watch delete seen for pod-submit-status-2-0 May 24 20:15:14.419: INFO: Pod pod-submit-status-2-0 on node leguer-worker timings total=16.369881542s t=1.455s run=0s execute=0s May 24 20:15:16.430: INFO: watch delete seen for pod-submit-status-2-1 May 24 20:15:16.430: INFO: Pod pod-submit-status-2-1 on node leguer-worker2 timings total=2.010498233s t=521ms run=0s execute=0s May 24 20:15:17.122: INFO: watch delete seen for pod-submit-status-1-1 May 24 20:15:17.122: INFO: Pod pod-submit-status-1-1 on node leguer-worker timings total=10.977363904s t=1.74s run=1s execute=0s May 24 20:15:18.230: INFO: watch delete seen for pod-submit-status-0-2 May 24 20:15:18.231: INFO: Pod pod-submit-status-0-2 on node leguer-worker2 timings total=11.600219972s t=789ms run=0s execute=0s May 24 20:15:20.266: INFO: watch delete seen for pod-submit-status-1-2 May 24 20:15:20.266: INFO: Pod pod-submit-status-1-2 on node leguer-worker timings total=3.144105145s t=675ms run=0s execute=0s May 
24 20:15:22.548: INFO: watch delete seen for pod-submit-status-0-3 May 24 20:15:22.548: INFO: Pod pod-submit-status-0-3 on node leguer-worker2 timings total=4.316381158s t=955ms run=1s execute=0s May 24 20:15:22.866: INFO: watch delete seen for pod-submit-status-1-3 May 24 20:15:22.866: INFO: Pod pod-submit-status-1-3 on node leguer-worker timings total=2.599988556s t=1.053s run=0s execute=0s May 24 20:15:29.145: INFO: watch delete seen for pod-submit-status-0-4 May 24 20:15:29.145: INFO: Pod pod-submit-status-0-4 on node leguer-worker2 timings total=6.59687204s t=144ms run=0s execute=0s May 24 20:15:29.267: INFO: watch delete seen for pod-submit-status-1-4 May 24 20:15:29.267: INFO: Pod pod-submit-status-1-4 on node leguer-worker timings total=6.401351219s t=1.019s run=0s execute=0s May 24 20:15:29.636: INFO: watch delete seen for pod-submit-status-2-2 May 24 20:15:29.636: INFO: Pod pod-submit-status-2-2 on node leguer-worker2 timings total=13.206345051s t=1.075s run=0s execute=0s May 24 20:15:31.942: INFO: watch delete seen for pod-submit-status-1-5 May 24 20:15:31.942: INFO: Pod pod-submit-status-1-5 on node leguer-worker2 timings total=2.674590708s t=253ms run=0s execute=0s May 24 20:15:33.225: INFO: watch delete seen for pod-submit-status-2-3 May 24 20:15:33.225: INFO: Pod pod-submit-status-2-3 on node leguer-worker2 timings total=3.588588679s t=255ms run=0s execute=0s May 24 20:15:34.544: INFO: watch delete seen for pod-submit-status-0-5 May 24 20:15:34.545: INFO: Pod pod-submit-status-0-5 on node leguer-worker2 timings total=5.399726537s t=1.837s run=1s execute=0s May 24 20:15:36.544: INFO: watch delete seen for pod-submit-status-2-4 May 24 20:15:36.544: INFO: Pod pod-submit-status-2-4 on node leguer-worker2 timings total=3.318471256s t=283ms run=0s execute=0s May 24 20:15:37.952: INFO: watch delete seen for pod-submit-status-1-6 May 24 20:15:37.952: INFO: Pod pod-submit-status-1-6 on node leguer-worker timings total=6.009940867s t=1.888s run=2s execute=0s May 24 20:15:42.881: INFO: watch delete seen for pod-submit-status-1-7 May 24 20:15:42.881: INFO: Pod pod-submit-status-1-7 on node leguer-worker timings total=4.928666168s t=270ms run=0s execute=0s May 24 20:15:47.034: INFO: watch delete seen for pod-submit-status-1-8 May 24 20:15:47.034: INFO: Pod pod-submit-status-1-8 on node leguer-worker timings total=4.152860009s t=134ms run=0s execute=0s May 24 20:15:48.024: INFO: watch delete seen for pod-submit-status-0-6 May 24 20:15:48.024: INFO: Pod pod-submit-status-0-6 on node leguer-worker timings total=13.479249643s t=841ms run=1s execute=0s May 24 20:15:48.026: INFO: watch delete seen for pod-submit-status-2-5 May 24 20:15:48.027: INFO: Pod pod-submit-status-2-5 on node leguer-worker2 timings total=11.482788751s t=1.907s run=1s execute=0s May 24 20:15:50.337: INFO: watch delete seen for pod-submit-status-1-9 May 24 20:15:50.337: INFO: Pod pod-submit-status-1-9 on node leguer-worker timings total=3.303607393s t=986ms run=1s execute=0s May 24 20:15:50.896: INFO: watch delete seen for pod-submit-status-0-7 May 24 20:15:50.896: INFO: pod pod-submit-status-0-7 on node leguer-worker2 failed with the symptoms of https://github.com/kubernetes/kubernetes/issues/88766 May 24 20:15:50.896: INFO: pod pod-submit-status-0-7 on node leguer-worker2 failed with the symptoms of https://github.com/kubernetes/kubernetes/issues/88766 May 24 20:15:50.896: INFO: pod pod-submit-status-0-7 on node leguer-worker2 failed with the symptoms of https://github.com/kubernetes/kubernetes/issues/88766 May 24 
20:15:50.896: INFO: Pod pod-submit-status-0-7 on node leguer-worker2 timings total=2.872219808s t=970ms run=1s execute=450524h15m49s May 24 20:15:55.538: INFO: watch delete seen for pod-submit-status-0-8 May 24 20:15:55.539: INFO: Pod pod-submit-status-0-8 on node leguer-worker timings total=4.642115914s t=290ms run=0s execute=0s May 24 20:15:57.901: INFO: watch delete seen for pod-submit-status-1-10 May 24 20:15:57.902: INFO: Pod pod-submit-status-1-10 on node leguer-worker timings total=7.564052687s t=694ms run=0s execute=0s May 24 20:15:57.911: INFO: watch delete seen for pod-submit-status-2-6 May 24 20:15:57.911: INFO: Pod pod-submit-status-2-6 on node leguer-worker timings total=9.884599292s t=87ms run=0s execute=0s May 24 20:15:57.927: INFO: watch delete seen for pod-submit-status-0-9 May 24 20:15:57.927: INFO: Pod pod-submit-status-0-9 on node leguer-worker2 timings total=2.388112778s t=1.247s run=1s execute=0s May 24 20:16:00.686: INFO: watch delete seen for pod-submit-status-0-10 May 24 20:16:00.686: INFO: Pod pod-submit-status-0-10 on node leguer-worker timings total=2.759387084s t=784ms run=0s execute=0s May 24 20:16:01.286: INFO: watch delete seen for pod-submit-status-2-7 May 24 20:16:01.286: INFO: Pod pod-submit-status-2-7 on node leguer-worker timings total=3.375058074s t=774ms run=0s execute=0s May 24 20:16:02.945: INFO: watch delete seen for pod-submit-status-1-11 May 24 20:16:02.945: INFO: Pod pod-submit-status-1-11 on node leguer-worker timings total=5.043543299s t=1.06s run=0s execute=0s May 24 20:16:05.287: INFO: watch delete seen for pod-submit-status-0-11 May 24 20:16:05.287: INFO: Pod pod-submit-status-0-11 on node leguer-worker timings total=4.600913972s t=1.217s run=1s execute=0s May 24 20:16:06.334: INFO: watch delete seen for pod-submit-status-1-12 May 24 20:16:06.334: INFO: Pod pod-submit-status-1-12 on node leguer-worker timings total=3.388708943s t=1.412s run=0s execute=0s May 24 20:16:06.888: INFO: watch delete seen for pod-submit-status-2-8 May 24 20:16:06.888: INFO: Pod pod-submit-status-2-8 on node leguer-worker timings total=5.601943687s t=1.749s run=1s execute=0s May 24 20:16:08.490: INFO: watch delete seen for pod-submit-status-0-12 May 24 20:16:08.490: INFO: Pod pod-submit-status-0-12 on node leguer-worker timings total=3.202665733s t=1.016s run=1s execute=0s May 24 20:16:10.527: INFO: watch delete seen for pod-submit-status-1-13 May 24 20:16:10.527: INFO: Pod pod-submit-status-1-13 on node leguer-worker timings total=4.193237629s t=1.345s run=1s execute=0s May 24 20:16:11.931: INFO: watch delete seen for pod-submit-status-2-9 May 24 20:16:11.931: INFO: Pod pod-submit-status-2-9 on node leguer-worker timings total=5.04207166s t=212ms run=0s execute=0s May 24 20:16:12.488: INFO: watch delete seen for pod-submit-status-0-13 May 24 20:16:12.488: INFO: Pod pod-submit-status-0-13 on node leguer-worker timings total=3.99813003s t=794ms run=0s execute=0s May 24 20:16:14.289: INFO: watch delete seen for pod-submit-status-1-14 May 24 20:16:14.289: INFO: Pod pod-submit-status-1-14 on node leguer-worker timings total=3.761604063s t=1.11s run=0s execute=0s May 24 20:16:15.689: INFO: watch delete seen for pod-submit-status-2-10 May 24 20:16:15.689: INFO: Pod pod-submit-status-2-10 on node leguer-worker timings total=3.758022031s t=63ms run=0s execute=0s May 24 20:16:16.725: INFO: watch delete seen for pod-submit-status-0-14 May 24 20:16:16.725: INFO: Pod pod-submit-status-0-14 on node leguer-worker timings total=4.237255148s t=555ms run=0s execute=0s May 24 
20:16:27.935: INFO: watch delete seen for pod-submit-status-2-11 May 24 20:16:27.935: INFO: Pod pod-submit-status-2-11 on node leguer-worker timings total=12.246659834s t=1.762s run=1s execute=0s May 24 20:16:32.084: INFO: watch delete seen for pod-submit-status-2-12 May 24 20:16:32.084: INFO: Pod pod-submit-status-2-12 on node leguer-worker timings total=4.148629898s t=1.845s run=0s execute=0s May 24 20:16:35.126: INFO: watch delete seen for pod-submit-status-2-13 May 24 20:16:35.126: INFO: Pod pod-submit-status-2-13 on node leguer-worker timings total=3.042129904s t=512ms run=0s execute=0s May 24 20:16:39.424: INFO: watch delete seen for pod-submit-status-2-14 May 24 20:16:39.424: INFO: Pod pod-submit-status-2-14 on node leguer-worker timings total=4.297959265s t=1.182s run=0s execute=0s [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:16:39.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7941" for this suite. • [SLOW TEST:102.293 seconds] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 [k8s.io] Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pod Container Status should never report success for a pending container","total":-1,"completed":1,"skipped":18,"failed":0} May 24 20:16:39.836: INFO: Running AfterSuite actions on all nodes {"msg":"PASSED [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly]","total":-1,"completed":2,"skipped":229,"failed":0} May 24 20:15:21.757: INFO: Running AfterSuite actions on all nodes May 24 20:16:39.897: INFO: Running AfterSuite actions on node 1 May 24 20:16:39.897: INFO: Skipping dumping logs from cluster Ran 17 of 5667 Specs in 102.811 seconds SUCCESS! -- 17 Passed | 0 Failed | 0 Pending | 5650 Skipped Ginkgo ran 1 suite in 1m44.824681411s Test Suite Passed
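For reference, the Mount propagation spec earlier in this run ("should propagate mounts to the host") drives several pods that share one hostPath directory and differ only in the volumeMount's mountPropagation mode, then checks from each pod and from the host which tmpfs mounts are visible. A rough sketch of how such pods can be built follows; the busybox image and the hostPath are assumptions (the real run uses the directory /var/lib/kubelet/mount-propagation-8316 seen in the nsenter commands above).

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

// propagationPod builds a pod in the style of the master/slave/private/default
// pods above: same hostPath volume, different mountPropagation mode.
func propagationPod(name string, mode *corev1.MountPropagationMode) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "cntr",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				// Bidirectional propagation is only permitted for privileged
				// containers, so the sketch marks the container privileged.
				SecurityContext: &corev1.SecurityContext{Privileged: boolPtr(true)},
				VolumeMounts: []corev1.VolumeMount{{
					Name:             "host-dir",
					MountPath:        "/mnt/test",
					MountPropagation: mode,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "host-dir",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/var/lib/kubelet/mount-propagation-demo"},
				},
			}},
		},
	}
}

func main() {
	bidir := corev1.MountPropagationBidirectional
	h2c := corev1.MountPropagationHostToContainer
	_ = propagationPod("master", &bidir) // its mounts propagate back to the host and on to HostToContainer pods
	_ = propagationPod("slave", &h2c)    // sees mounts that appear on the host, but its own mounts stay local
	_ = propagationPod("private", nil)   // nil means the default, MountPropagationNone: fully isolated
}
```

The per-pod cat results in the log match this layout: the slave pod can read the master and host files, the private and default pods see only their own mounts, and only the master's mount shows up on the host.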