Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1621941616 - Will randomize all specs
Will run 5771 specs

Running in parallel across 10 nodes

May 25 11:20:18.489: INFO: >>> kubeConfig: /root/.kube/config
May 25 11:20:18.494: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 25 11:20:18.533: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 25 11:20:18.578: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 25 11:20:18.578: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 25 11:20:18.578: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 25 11:20:18.593: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
May 25 11:20:18.593: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 25 11:20:18.593: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds' (0 seconds elapsed)
May 25 11:20:18.593: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 25 11:20:18.593: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed)
May 25 11:20:18.593: INFO: e2e test version: v1.21.1
May 25 11:20:18.595: INFO: kube-apiserver version: v1.21.1
May 25 11:20:18.595: INFO: >>> kubeConfig: /root/.kube/config
May 25 11:20:18.600: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSS
------------------------------
May 25 11:20:18.601: INFO: >>> kubeConfig: /root/.kube/config
May 25 11:20:18.627: INFO: Cluster IP family: ipv4
S
------------------------------
May 25 11:20:18.605: INFO: >>> kubeConfig: /root/.kube/config
May 25 11:20:18.628: INFO: Cluster IP family: ipv4
SSSS
------------------------------
May 25 11:20:18.602: INFO: >>> kubeConfig: /root/.kube/config
May 25 11:20:18.630: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSS
------------------------------
May 25 11:20:18.615: INFO: >>> kubeConfig: /root/.kube/config
May 25 11:20:18.637: INFO: Cluster IP family: ipv4
SSSSSSSSSSS
------------------------------
May 25 11:20:18.621: INFO: >>> kubeConfig: /root/.kube/config
May 25 11:20:18.642: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
May 25 11:20:18.635: INFO: >>> kubeConfig: /root/.kube/config
May 25 11:20:18.653: INFO: Cluster IP family: ipv4
SSS
------------------------------
May 25 11:20:18.634: INFO: >>> kubeConfig: /root/.kube/config
May 25 11:20:18.654: INFO: Cluster IP family: ipv4
SSSSSSSSSSS
------------------------------
May 25 11:20:18.640: INFO: >>> kubeConfig: /root/.kube/config
May 25 11:20:18.658: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSS
------------------------------
May 25 11:20:18.646: INFO: >>> kubeConfig: /root/.kube/config
May 25 11:20:18.664: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:20:18.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
W0525 11:20:18.809829      24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 25 11:20:18.810: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 25 11:20:18.813: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should update ConfigMap successfully
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/configmap.go:140
STEP: Creating ConfigMap configmap-865/configmap-test-ea22ee20-6efe-4a37-a737-b4fa59a18ced
STEP: Updating configMap configmap-865/configmap-test-ea22ee20-6efe-4a37-a737-b4fa59a18ced
STEP: Verifying update of ConfigMap configmap-865/configmap-test-ea22ee20-6efe-4a37-a737-b4fa59a18ced
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:20:18.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-865" for this suite.
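The ConfigMap spec above drives a create, update, verify cycle through the API server. A minimal sketch of that cycle, with an in-memory dict standing in for the cluster (the helper names and the resourceVersion bump are illustrative, not the real client-go API):

```python
# Create -> update -> verify, sketched against an in-memory "cluster".
store = {}

def create_configmap(namespace, name, data):
    store[(namespace, name)] = {
        "metadata": {"name": name, "namespace": namespace, "resourceVersion": 1},
        "data": dict(data),
    }

def update_configmap(namespace, name, data):
    cm = store[(namespace, name)]
    cm["data"] = dict(data)
    cm["metadata"]["resourceVersion"] += 1  # the server bumps this on every write
    return cm

create_configmap("configmap-865", "configmap-test", {"key": "value"})
updated = update_configmap("configmap-865", "configmap-test", {"key": "updated"})
assert updated["data"] == {"key": "updated"}
```

The verification step in the real test simply re-reads the object and compares `data`, which is what the final assertion mimics.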
•SSSSS
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":1,"skipped":54,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:20:18.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
W0525 11:20:18.810657      25 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 25 11:20:18.810: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 25 11:20:18.814: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
May 25 11:20:18.822: INFO: Waiting up to 5m0s for pod "security-context-00668031-c080-4854-b42b-e0e3e067a441" in namespace "security-context-2230" to be "Succeeded or Failed"
May 25 11:20:18.825: INFO: Pod "security-context-00668031-c080-4854-b42b-e0e3e067a441": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022586ms
May 25 11:20:20.829: INFO: Pod "security-context-00668031-c080-4854-b42b-e0e3e067a441": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006975625s
STEP: Saw pod success
May 25 11:20:20.830: INFO: Pod "security-context-00668031-c080-4854-b42b-e0e3e067a441" satisfied condition "Succeeded or Failed"
May 25 11:20:20.833: INFO: Trying to get logs from node v1.21-worker2 pod security-context-00668031-c080-4854-b42b-e0e3e067a441 container test-container:
STEP: delete the pod
May 25 11:20:20.863: INFO: Waiting for pod security-context-00668031-c080-4854-b42b-e0e3e067a441 to disappear
May 25 11:20:20.866: INFO: Pod security-context-00668031-c080-4854-b42b-e0e3e067a441 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:20:20.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-2230" for this suite.
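The pod that spec creates is, roughly, a single container whose `securityContext.runAsUser` pins the UID, with a command that prints the UID so the container log can be checked. A sketch of the manifest shape (field names follow the Pod API; the UID, image, and command are illustrative):

```python
# Shape of a pod exercising container-level runAsUser (illustrative values).
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "security-context-test"},
    "spec": {
        "restartPolicy": "Never",  # the pod is expected to run once and exit
        "containers": [{
            "name": "test-container",
            "image": "busybox",
            "command": ["sh", "-c", "id -u"],  # log output is the effective UID
            "securityContext": {"runAsUser": 1001},
        }],
    },
}
uid = pod["spec"]["containers"][0]["securityContext"]["runAsUser"]
assert uid == 1001
```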
•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":1,"skipped":53,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:20:18.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
W0525 11:20:19.005132      34 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 25 11:20:19.005: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 25 11:20:19.008: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
May 25 11:20:19.017: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-92e94cb0-1d14-49db-aed6-6c71038a9ebc" in namespace "security-context-test-5799" to be "Succeeded or Failed"
May 25 11:20:19.019: INFO: Pod "busybox-readonly-true-92e94cb0-1d14-49db-aed6-6c71038a9ebc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288365ms
May 25 11:20:21.023: INFO: Pod "busybox-readonly-true-92e94cb0-1d14-49db-aed6-6c71038a9ebc": Phase="Failed", Reason="", readiness=false. Elapsed: 2.006092257s
May 25 11:20:21.023: INFO: Pod "busybox-readonly-true-92e94cb0-1d14-49db-aed6-6c71038a9ebc" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:20:21.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5799" for this suite.
•S
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":214,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:20:21.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename examples
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50
May 25 11:20:21.253: INFO: Found ClusterRoles; assuming RBAC is enabled.
[It] should create a pod that reads a secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114
STEP: creating secret and pod
May 25 11:20:21.361: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=examples-5710 create -f -'
May 25 11:20:21.823: INFO: stderr: ""
May 25 11:20:21.823: INFO: stdout: "secret/test-secret created\n"
May 25 11:20:21.823: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=examples-5710 create -f -'
May 25 11:20:22.124: INFO: stderr: ""
May 25 11:20:22.124: INFO: stdout: "pod/secret-test-pod created\n"
STEP: checking if secret was read correctly
May 25 11:20:24.131: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=examples-5710 logs secret-test-pod test-container'
May 25 11:20:24.266: INFO: stderr: ""
May 25 11:20:24.266: INFO: stdout: "content of file \"/etc/secret-volume/data-1\": value-1\n\n"
[AfterEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:20:24.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "examples-5710" for this suite.
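The example test above creates a Secret and a pod that mounts it at `/etc/secret-volume`, then reads `data-1` from the volume. Secret values are stored base64-encoded, and the kubelet materializes each key as a file containing the decoded bytes. A sketch of that materialization step, using a temp directory in place of the in-pod mount path:

```python
import base64
import pathlib
import tempfile

# Secret object shape: each data value is base64-encoded (illustrative content).
secret = {"data": {"data-1": base64.b64encode(b"value-1").decode()}}

# Stand-in for the kubelet projecting the secret into /etc/secret-volume:
mount = pathlib.Path(tempfile.mkdtemp())
for key, value in secret["data"].items():
    (mount / key).write_bytes(base64.b64decode(value))  # one file per key

content = (mount / "data-1").read_text()
assert content == "value-1"
```

The pod in the test then just `cat`s that file and the test greps the container log for `value-1`, which is the stdout seen above.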
•
------------------------------
{"msg":"PASSED [sig-node] [Feature:Example] Secret should create a pod that reads a secret","total":-1,"completed":2,"skipped":241,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:20:18.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
W0525 11:20:18.664589      19 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 25 11:20:18.664: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 25 11:20:18.670: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
May 25 11:20:18.678: INFO: Waiting up to 5m0s for pod "security-context-cd1536ca-0d69-42b0-bc48-aad6edff84af" in namespace "security-context-3515" to be "Succeeded or Failed"
May 25 11:20:18.681: INFO: Pod "security-context-cd1536ca-0d69-42b0-bc48-aad6edff84af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.609296ms
May 25 11:20:20.686: INFO: Pod "security-context-cd1536ca-0d69-42b0-bc48-aad6edff84af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007111147s
May 25 11:20:22.690: INFO: Pod "security-context-cd1536ca-0d69-42b0-bc48-aad6edff84af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011677207s
May 25 11:20:24.695: INFO: Pod "security-context-cd1536ca-0d69-42b0-bc48-aad6edff84af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016456595s
STEP: Saw pod success
May 25 11:20:24.695: INFO: Pod "security-context-cd1536ca-0d69-42b0-bc48-aad6edff84af" satisfied condition "Succeeded or Failed"
May 25 11:20:24.698: INFO: Trying to get logs from node v1.21-worker pod security-context-cd1536ca-0d69-42b0-bc48-aad6edff84af container test-container:
STEP: delete the pod
May 25 11:20:25.095: INFO: Waiting for pod security-context-cd1536ca-0d69-42b0-bc48-aad6edff84af to disappear
May 25 11:20:25.098: INFO: Pod security-context-cd1536ca-0d69-42b0-bc48-aad6edff84af no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:20:25.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-3515" for this suite.

• [SLOW TEST:6.470 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":1,"skipped":21,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:20:18.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
W0525 11:20:18.826819      22 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 25 11:20:18.826: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 25 11:20:18.830: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
STEP: Creating a pod to test downward api env vars
May 25 11:20:18.837: INFO: Waiting up to 5m0s for pod "downward-api-c6aa827d-63d2-4e41-8e8a-6d95df79a5e1" in namespace "downward-api-2500" to be "Succeeded or Failed"
May 25 11:20:18.840: INFO: Pod "downward-api-c6aa827d-63d2-4e41-8e8a-6d95df79a5e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.566948ms
May 25 11:20:20.844: INFO: Pod "downward-api-c6aa827d-63d2-4e41-8e8a-6d95df79a5e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006399573s
May 25 11:20:22.848: INFO: Pod "downward-api-c6aa827d-63d2-4e41-8e8a-6d95df79a5e1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010417161s
May 25 11:20:24.852: INFO: Pod "downward-api-c6aa827d-63d2-4e41-8e8a-6d95df79a5e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014939887s
STEP: Saw pod success
May 25 11:20:24.852: INFO: Pod "downward-api-c6aa827d-63d2-4e41-8e8a-6d95df79a5e1" satisfied condition "Succeeded or Failed"
May 25 11:20:24.856: INFO: Trying to get logs from node v1.21-worker pod downward-api-c6aa827d-63d2-4e41-8e8a-6d95df79a5e1 container dapi-container:
STEP: delete the pod
May 25 11:20:25.297: INFO: Waiting for pod downward-api-c6aa827d-63d2-4e41-8e8a-6d95df79a5e1 to disappear
May 25 11:20:25.300: INFO: Pod downward-api-c6aa827d-63d2-4e41-8e8a-6d95df79a5e1 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:20:25.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2500" for this suite.

• [SLOW TEST:6.507 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":1,"skipped":88,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:20:24.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sysctl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:201
STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node
STEP: Watching for error events or started pod
STEP: Checking that the pod was rejected
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:20:26.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-1766" for this suite.
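The sysctl spec above relies on the kubelet rejecting "unsafe" sysctls unless the operator explicitly allows them on the node. A sketch of that admission gate; the safe list below reflects the kubelet's defaults of this era and the helper name is ours, not the kubelet's:

```python
# Sysctls the kubelet considers safe to set per-pod without node opt-in
# (assumption: default safe list for kubelets around v1.21).
SAFE_SYSCTLS = {
    "kernel.shm_rmid_forced",
    "net.ipv4.ip_local_port_range",
    "net.ipv4.tcp_syncookies",
    "net.ipv4.ping_group_range",
}

def admit_sysctl(name, allowed_unsafe=()):
    """Admit a pod sysctl only if it is safe or explicitly enabled on the node."""
    return name in SAFE_SYSCTLS or name in allowed_unsafe

# kernel.shmmax is greylisted: a legal sysctl, but rejected unless the node
# was started with it in --allowed-unsafe-sysctls.
assert not admit_sysctl("kernel.shmmax")
assert admit_sysctl("kernel.shmmax", allowed_unsafe=["kernel.shmmax"])
assert admit_sysctl("net.ipv4.tcp_syncookies")
```

The "Checking that the pod was rejected" step corresponds to the first assertion: the test node has no unsafe sysctls enabled, so the pod never starts.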
•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":3,"skipped":260,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:20:18.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
W0525 11:20:18.864060      27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 25 11:20:18.864: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 25 11:20:18.867: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp runtime/default [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
May 25 11:20:18.875: INFO: Waiting up to 5m0s for pod "security-context-92bb420b-e7da-489f-8e9d-e1bd01fff61f" in namespace "security-context-8592" to be "Succeeded or Failed"
May 25 11:20:18.878: INFO: Pod "security-context-92bb420b-e7da-489f-8e9d-e1bd01fff61f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.329222ms
May 25 11:20:20.881: INFO: Pod "security-context-92bb420b-e7da-489f-8e9d-e1bd01fff61f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005657254s
May 25 11:20:22.886: INFO: Pod "security-context-92bb420b-e7da-489f-8e9d-e1bd01fff61f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010633499s
May 25 11:20:24.890: INFO: Pod "security-context-92bb420b-e7da-489f-8e9d-e1bd01fff61f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014857411s
May 25 11:20:26.894: INFO: Pod "security-context-92bb420b-e7da-489f-8e9d-e1bd01fff61f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.019043323s
STEP: Saw pod success
May 25 11:20:26.894: INFO: Pod "security-context-92bb420b-e7da-489f-8e9d-e1bd01fff61f" satisfied condition "Succeeded or Failed"
May 25 11:20:26.897: INFO: Trying to get logs from node v1.21-worker pod security-context-92bb420b-e7da-489f-8e9d-e1bd01fff61f container test-container:
STEP: delete the pod
May 25 11:20:26.912: INFO: Waiting for pod security-context-92bb420b-e7da-489f-8e9d-e1bd01fff61f to disappear
May 25 11:20:26.915: INFO: Pod security-context-92bb420b-e7da-489f-8e9d-e1bd01fff61f no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:20:26.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-8592" for this suite.
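As the STEP lines show, the v1.21 seccomp tests still configure the profile through the alpha pod annotation (`seccomp.security.alpha.kubernetes.io/pod`) rather than `securityContext.seccompProfile`, and an unset annotation historically meant "unconfined". A sketch of reading the effective profile under that convention:

```python
# Effective seccomp profile via the legacy alpha annotation (assumption:
# pre-GA behavior where "no annotation" meant unconfined).
SECCOMP_POD_ANNOTATION = "seccomp.security.alpha.kubernetes.io/pod"

def effective_seccomp_profile(pod):
    annotations = pod["metadata"].get("annotations", {})
    return annotations.get(SECCOMP_POD_ANNOTATION, "unconfined")

pod = {"metadata": {"annotations": {SECCOMP_POD_ANNOTATION: "runtime/default"}}}
assert effective_seccomp_profile(pod) == "runtime/default"
assert effective_seccomp_profile({"metadata": {}}) == "unconfined"
```

The "seccomp default which is unconfined" and "seccomp runtime/default" specs are the two branches of this lookup; the tests verify the resulting profile by inspecting the container's view of `/proc/self/status`.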
• [SLOW TEST:8.080 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp runtime/default [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":1,"skipped":111,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:20:18.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
W0525 11:20:18.957289      32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 25 11:20:18.957: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 25 11:20:18.960: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the container [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
May 25 11:20:18.968: INFO: Waiting up to 5m0s for pod "security-context-d6c514da-576e-4916-b099-713618fb95c5" in namespace "security-context-4710" to be "Succeeded or Failed"
May 25 11:20:18.970: INFO: Pod "security-context-d6c514da-576e-4916-b099-713618fb95c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.261782ms
May 25 11:20:20.974: INFO: Pod "security-context-d6c514da-576e-4916-b099-713618fb95c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006326736s
May 25 11:20:22.978: INFO: Pod "security-context-d6c514da-576e-4916-b099-713618fb95c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010362332s
May 25 11:20:24.983: INFO: Pod "security-context-d6c514da-576e-4916-b099-713618fb95c5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015004373s
May 25 11:20:26.987: INFO: Pod "security-context-d6c514da-576e-4916-b099-713618fb95c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.019546505s
STEP: Saw pod success
May 25 11:20:26.987: INFO: Pod "security-context-d6c514da-576e-4916-b099-713618fb95c5" satisfied condition "Succeeded or Failed"
May 25 11:20:26.991: INFO: Trying to get logs from node v1.21-worker pod security-context-d6c514da-576e-4916-b099-713618fb95c5 container test-container:
STEP: delete the pod
May 25 11:20:27.005: INFO: Waiting for pod security-context-d6c514da-576e-4916-b099-713618fb95c5 to disappear
May 25 11:20:27.008: INFO: Pod security-context-d6c514da-576e-4916-b099-713618fb95c5 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:20:27.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-4710" for this suite.
• [SLOW TEST:8.087 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp unconfined on the container [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":1,"skipped":164,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] PrivilegedPod [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:20:18.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-privileged-pod
W0525 11:20:18.827786      21 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 25 11:20:18.827: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 25 11:20:18.831: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should enable privileged commands [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49
STEP: Creating a pod with a privileged container
May 25 11:20:18.841: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
May 25 11:20:20.845: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
May 25 11:20:22.846: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
May 25 11:20:24.846: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
May 25 11:20:26.846: INFO: The status of Pod privileged-pod is Running (Ready = true)
STEP: Executing in the privileged container
May 25 11:20:26.849: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-925 PodName:privileged-pod ContainerName:privileged-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 25 11:20:26.849: INFO: >>> kubeConfig: /root/.kube/config
May 25 11:20:27.001: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-925 PodName:privileged-pod ContainerName:privileged-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 25 11:20:27.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Executing in the non-privileged container
May 25 11:20:27.196: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-925 PodName:privileged-pod ContainerName:not-privileged-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 25 11:20:27.196: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-node] PrivilegedPod [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:20:27.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-privileged-pod-925" for this suite.
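The privileged-pod spec runs two containers in one pod and execs `ip link add dummy1 type dummy` in each; creating a network device needs CAP_NET_ADMIN, which privileged mode grants, so the command must succeed only in the privileged container. A sketch of that expectation (field names from the Pod API; the checker function is ours):

```python
# Two-container pod: only one is privileged (illustrative manifest shape).
pod = {
    "metadata": {"name": "privileged-pod"},
    "spec": {"containers": [
        {"name": "privileged-container", "securityContext": {"privileged": True}},
        {"name": "not-privileged-container", "securityContext": {"privileged": False}},
    ]},
}

def can_manage_net_devices(container):
    # `ip link add` requires CAP_NET_ADMIN; privileged containers get all caps.
    return container.get("securityContext", {}).get("privileged", False)

results = {c["name"]: can_manage_net_devices(c) for c in pod["spec"]["containers"]}
assert results == {"privileged-container": True, "not-privileged-container": False}
```

This mirrors the exec sequence in the log: add and delete the dummy link in `privileged-container`, then verify the same add fails in `not-privileged-container`.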
• [SLOW TEST:8.528 seconds] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":1,"skipped":81,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:20:19.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context W0525 11:20:19.105774 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 25 11:20:19.105: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 25 11:20:19.109: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser May 25 11:20:19.119: INFO: Waiting up to 5m0s for pod "security-context-05f2ae50-73cd-4ee9-b714-7fcebf315643" in namespace "security-context-7737" to be "Succeeded or Failed" May 25 11:20:19.122: INFO: Pod "security-context-05f2ae50-73cd-4ee9-b714-7fcebf315643": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.812171ms May 25 11:20:21.126: INFO: Pod "security-context-05f2ae50-73cd-4ee9-b714-7fcebf315643": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006683555s May 25 11:20:23.131: INFO: Pod "security-context-05f2ae50-73cd-4ee9-b714-7fcebf315643": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011391876s May 25 11:20:25.135: INFO: Pod "security-context-05f2ae50-73cd-4ee9-b714-7fcebf315643": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015793553s May 25 11:20:27.139: INFO: Pod "security-context-05f2ae50-73cd-4ee9-b714-7fcebf315643": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019920012s May 25 11:20:29.143: INFO: Pod "security-context-05f2ae50-73cd-4ee9-b714-7fcebf315643": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.024320008s STEP: Saw pod success May 25 11:20:29.144: INFO: Pod "security-context-05f2ae50-73cd-4ee9-b714-7fcebf315643" satisfied condition "Succeeded or Failed" May 25 11:20:29.147: INFO: Trying to get logs from node v1.21-worker pod security-context-05f2ae50-73cd-4ee9-b714-7fcebf315643 container test-container: STEP: delete the pod May 25 11:20:29.161: INFO: Waiting for pod security-context-05f2ae50-73cd-4ee9-b714-7fcebf315643 to disappear May 25 11:20:29.164: INFO: Pod security-context-05f2ae50-73cd-4ee9-b714-7fcebf315643 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:20:29.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-7737" for this suite. 
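The pod.Spec.SecurityContext.RunAsUser case creates a pod whose pod-level security context pins the UID and then checks the container output. Roughly (UID and image are illustrative; the real pod name is generated):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-runasuser
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                  # pod-level: inherited by every container
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "id -u"]   # should print the configured UID
```

As in the log, the framework waits for the pod to reach "Succeeded or Failed" and then reads the logs from test-container.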
• [SLOW TEST:10.098 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":1,"skipped":272,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:20:27.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159 [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:20:29.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4127" for this suite. 
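The "should not run without a specified user ID" case pairs `runAsNonRoot: true` with an image that declares no numeric user, so the kubelet cannot prove the process is non-root and refuses to start the container (surfaced as `CreateContainerConfigError`). A sketch under those assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: run-as-non-root-demo   # illustrative name
spec:
  containers:
  - name: main
    image: busybox:1.29        # defaults to root, sets no numeric USER
    securityContext:
      runAsNonRoot: true       # with no runAsUser anywhere, the container must not start
```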
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":2,"skipped":344,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:20:26.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 25 11:20:33.281: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:20:33.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5869" for this suite. 
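The termination-message check above ("Expected: &{DONE} to match Container's Termination Message: DONE") relies on the container writing to its `terminationMessagePath` before exiting; the kubelet copies that file into the terminated container status. A sketch (pod name is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    # write the message (no trailing newline) to the configured path, then exit
    command: ["sh", "-c", "printf DONE > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log   # this is also the default path
```

The message then appears under `status.containerStatuses[*].state.terminated.message`, which is where the test reads it back.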
• [SLOW TEST:7.986 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":4,"skipped":328,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:20:29.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99 May 25 11:20:29.479: INFO: Waiting up to 5m0s for pod "busybox-user-0-0bb0b72f-3ff8-4a07-8165-788e18fe2e26" in namespace "security-context-test-3370" to be "Succeeded or Failed" May 25 11:20:29.482: INFO: Pod "busybox-user-0-0bb0b72f-3ff8-4a07-8165-788e18fe2e26": Phase="Pending", Reason="", 
readiness=false. Elapsed: 2.463506ms May 25 11:20:31.678: INFO: Pod "busybox-user-0-0bb0b72f-3ff8-4a07-8165-788e18fe2e26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198054357s May 25 11:20:33.878: INFO: Pod "busybox-user-0-0bb0b72f-3ff8-4a07-8165-788e18fe2e26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.398729741s May 25 11:20:35.987: INFO: Pod "busybox-user-0-0bb0b72f-3ff8-4a07-8165-788e18fe2e26": Phase="Pending", Reason="", readiness=false. Elapsed: 6.507622047s May 25 11:20:37.992: INFO: Pod "busybox-user-0-0bb0b72f-3ff8-4a07-8165-788e18fe2e26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.512588653s May 25 11:20:37.992: INFO: Pod "busybox-user-0-0bb0b72f-3ff8-4a07-8165-788e18fe2e26" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:20:37.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3370" for this suite. 
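The uid-0 variant sets `runAsUser` on the container's own security context rather than the pod's; the `busybox-user-0-...` pod name in the log suggests a fixture along these lines (details are an assumption):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-user-0-demo    # the real name carries a generated suffix
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "id -u"]   # should print 0
    securityContext:
      runAsUser: 0   # container-level setting, overrides any pod-level value
```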
• [SLOW TEST:8.555 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50 should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":3,"skipped":368,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:20:34.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:20:38.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9075" for this suite. 
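The "should be able to pull image" case only needs the runtime to fetch a public image and start a container from it. In manifest terms the equivalent is simply a pod referencing the image with a forced pull (a sketch; the e2e test drives the container runtime more directly):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: image-pull-demo        # illustrative name
spec:
  containers:
  - name: main
    image: busybox:1.29
    imagePullPolicy: Always    # force a fresh pull even if the image is cached
    command: ["sleep", "3600"]
```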
• ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":5,"skipped":337,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:20:25.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename mount-propagation STEP: Waiting for a default service account to be provisioned in namespace [It] should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 May 25 11:20:25.585: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) May 25 11:20:27.589: INFO: The status of Pod master is Running (Ready = true) May 25 11:20:27.600: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) May 25 11:20:29.603: INFO: The status of Pod slave is Running (Ready = true) May 25 11:20:29.613: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) May 25 11:20:31.678: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) May 25 11:20:33.879: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) May 25 11:20:35.678: INFO: The status of Pod private is Running (Ready = true) May 25 11:20:35.987: INFO: The status of Pod default is Pending, waiting for it to be Running (with 
Ready = true) May 25 11:20:37.992: INFO: The status of Pod default is Running (Ready = true) May 25 11:20:37.998: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-5548 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:37.998: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:38.147: INFO: Exec stderr: "" May 25 11:20:38.150: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-5548 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:38.150: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:38.286: INFO: Exec stderr: "" May 25 11:20:38.290: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-5548 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:38.290: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:38.428: INFO: Exec stderr: "" May 25 11:20:38.431: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-5548 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:38.431: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:38.577: INFO: Exec stderr: "" May 25 11:20:38.616: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-5548 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:38.616: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:38.759: INFO: Exec stderr: "" May 25 11:20:38.762: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-5548 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:38.762: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:38.900: INFO: Exec stderr: "" May 25 11:20:38.903: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-5548 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:38.904: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:39.044: INFO: Exec stderr: "" May 25 11:20:39.048: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-5548 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:39.048: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:39.165: INFO: Exec stderr: "" May 25 11:20:39.168: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-5548 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:39.168: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:39.329: INFO: Exec stderr: "" May 25 11:20:39.333: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-5548 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:39.333: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:39.474: INFO: Exec stderr: "" May 25 11:20:39.477: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-5548 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:39.477: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:39.616: INFO: Exec stderr: "" May 25 11:20:39.619: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] 
Namespace:mount-propagation-5548 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:39.620: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:39.759: INFO: Exec stderr: "" May 25 11:20:39.762: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-5548 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:39.762: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:39.889: INFO: Exec stderr: "" May 25 11:20:39.893: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-5548 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:39.893: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:40.044: INFO: Exec stderr: "" May 25 11:20:40.047: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-5548 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:40.047: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:40.194: INFO: Exec stderr: "" May 25 11:20:40.196: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-5548 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:40.196: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:40.321: INFO: Exec stderr: "" May 25 11:20:40.324: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-master /mnt/test/master; echo master > /mnt/test/master/file] Namespace:mount-propagation-5548 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:40.324: INFO: >>> kubeConfig: 
/root/.kube/config May 25 11:20:40.453: INFO: Exec stderr: "" May 25 11:20:40.456: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-slave /mnt/test/slave; echo slave > /mnt/test/slave/file] Namespace:mount-propagation-5548 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:40.456: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:40.592: INFO: Exec stderr: "" May 25 11:20:40.596: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-private /mnt/test/private; echo private > /mnt/test/private/file] Namespace:mount-propagation-5548 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:40.596: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:40.737: INFO: Exec stderr: "" May 25 11:20:40.740: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-default /mnt/test/default; echo default > /mnt/test/default/file] Namespace:mount-propagation-5548 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:40.740: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:40.891: INFO: Exec stderr: "" May 25 11:20:42.904: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-5548"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-5548"/host; echo host > "/var/lib/kubelet/mount-propagation-5548"/host/file] Namespace:mount-propagation-5548 PodName:hostexec-v1.21-worker2-q96gq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 11:20:42.905: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:43.065: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-5548 
PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:43.065: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:43.195: INFO: pod slave mount master: stdout: "master", stderr: "" error: May 25 11:20:43.199: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-5548 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:43.199: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:43.334: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: May 25 11:20:43.337: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-5548 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:43.337: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:43.476: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 May 25 11:20:43.479: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-5548 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:43.479: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:43.619: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 May 25 11:20:43.622: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-5548 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:43.622: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:43.753: INFO: pod slave mount host: stdout: "host", stderr: 
"" error: May 25 11:20:43.755: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-5548 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:43.755: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:43.897: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 May 25 11:20:43.901: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-5548 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:43.901: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:44.028: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 May 25 11:20:44.032: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-5548 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:44.032: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:44.172: INFO: pod private mount private: stdout: "private", stderr: "" error: May 25 11:20:44.175: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-5548 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:44.175: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:44.312: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 May 25 11:20:44.315: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-5548 
PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:44.315: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:44.459: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 May 25 11:20:44.462: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-5548 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:44.462: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:44.579: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 May 25 11:20:44.582: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-5548 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:44.582: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:44.727: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 May 25 11:20:44.730: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-5548 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:44.730: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:44.875: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 May 25 11:20:44.878: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-5548 PodName:default ContainerName:cntr Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:44.878: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:45.015: INFO: pod default mount default: stdout: "default", stderr: "" error: May 25 11:20:45.020: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-5548 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:45.020: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:45.159: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 May 25 11:20:45.162: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-5548 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:45.162: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:45.307: INFO: pod master mount master: stdout: "master", stderr: "" error: May 25 11:20:45.311: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-5548 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:45.311: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:45.444: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 May 25 11:20:45.448: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-5548 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:45.448: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:45.594: INFO: pod master mount private: stdout: "", stderr: "cat: can't open 
'/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 May 25 11:20:45.597: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-5548 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:45.597: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:45.748: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 May 25 11:20:45.751: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-5548 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:45.751: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:45.871: INFO: pod master mount host: stdout: "host", stderr: "" error: May 25 11:20:45.872: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-5548"/master/file` = master] Namespace:mount-propagation-5548 PodName:hostexec-v1.21-worker2-q96gq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 11:20:45.872: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:46.011: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! 
-e "/var/lib/kubelet/mount-propagation-5548"/slave/file] Namespace:mount-propagation-5548 PodName:hostexec-v1.21-worker2-q96gq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 11:20:46.011: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:46.138: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-5548"/host] Namespace:mount-propagation-5548 PodName:hostexec-v1.21-worker2-q96gq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 11:20:46.138: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:46.271: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/default] Namespace:mount-propagation-5548 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:46.271: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:46.415: INFO: Exec stderr: "" May 25 11:20:46.418: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/private] Namespace:mount-propagation-5548 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:46.418: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:46.550: INFO: Exec stderr: "" May 25 11:20:46.553: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/slave] Namespace:mount-propagation-5548 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:46.553: INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:46.688: INFO: Exec stderr: "" May 25 11:20:46.691: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/master] Namespace:mount-propagation-5548 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 11:20:46.691: 
INFO: >>> kubeConfig: /root/.kube/config May 25 11:20:46.836: INFO: Exec stderr: "" May 25 11:20:46.836: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -rf "/var/lib/kubelet/mount-propagation-5548"] Namespace:mount-propagation-5548 PodName:hostexec-v1.21-worker2-q96gq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 11:20:46.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod hostexec-v1.21-worker2-q96gq in namespace mount-propagation-5548 [AfterEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:20:46.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mount-propagation-5548" for this suite. • [SLOW TEST:21.458 seconds] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 ------------------------------ {"msg":"PASSED [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":2,"skipped":215,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:20:47.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 May 25 11:20:47.093: INFO: Only supported for node OS distro [gci 
ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:20:47.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-7816" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.041 seconds] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 should enforce an AppArmor profile [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:20:47.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] should have OwnerReferences set /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:88 [AfterEach] [sig-node] NodeLease 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:20:47.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-2794" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":3,"skipped":337,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:20:47.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:20:50.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3614" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":4,"skipped":426,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:20:27.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449 STEP: Creating pod liveness-override-83539e6f-30ea-41dd-9e41-584c80139dde in namespace container-probe-6552 May 25 11:20:35.679: INFO: Started pod liveness-override-83539e6f-30ea-41dd-9e41-584c80139dde in namespace container-probe-6552 STEP: checking the pod's current state and verifying that restartCount is present May 25 11:20:35.683: INFO: Initial restart count of pod liveness-override-83539e6f-30ea-41dd-9e41-584c80139dde is 1 May 25 11:20:53.788: INFO: Restart count of pod container-probe-6552/liveness-override-83539e6f-30ea-41dd-9e41-584c80139dde is now 2 (18.105636988s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:20:53.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6552" for this suite. • [SLOW TEST:26.376 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449 ------------------------------ {"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":2,"skipped":131,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:20:38.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 STEP: submitting the pod to kubernetes STEP: patching pod status with condition "k8s.io/test-condition1" to true STEP: patching pod status with condition "k8s.io/test-condition2" to true STEP: patching pod status with condition "k8s.io/test-condition1" to false [AfterEach] [sig-node] Pods 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:20:54.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4259" for this suite. • [SLOW TEST:16.100 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 ------------------------------ {"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":4,"skipped":420,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:20:54.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should create and update a lease in the kube-node-lease namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:50 STEP: check that lease for this Kubelet exists in the kube-node-lease namespace STEP: check that node lease is updated at least once within the lease duration [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:20:54.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-7417" for this 
suite. • ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":5,"skipped":450,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:20:53.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367 May 25 11:20:53.898: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-83c41f3e-e153-4c47-8cf0-6842d0ea1ba5" in namespace "security-context-test-2683" to be "Succeeded or Failed" May 25 11:20:53.901: INFO: Pod "alpine-nnp-true-83c41f3e-e153-4c47-8cf0-6842d0ea1ba5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.971358ms May 25 11:20:55.906: INFO: Pod "alpine-nnp-true-83c41f3e-e153-4c47-8cf0-6842d0ea1ba5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007560102s May 25 11:20:55.906: INFO: Pod "alpine-nnp-true-83c41f3e-e153-4c47-8cf0-6842d0ea1ba5" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:20:55.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2683" for this suite. 
• ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:20:25.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 STEP: Creating pod busybox-ea2fe191-4f35-4bf8-b990-b0f4572f6185 in namespace container-probe-6141 May 25 11:20:27.469: INFO: Started pod busybox-ea2fe191-4f35-4bf8-b990-b0f4572f6185 in namespace container-probe-6141 STEP: checking the pod's current state and verifying that restartCount is present May 25 11:20:27.472: INFO: Initial restart count of pod busybox-ea2fe191-4f35-4bf8-b990-b0f4572f6185 is 0 May 25 11:21:18.082: INFO: Restart count of pod container-probe-6141/busybox-ea2fe191-4f35-4bf8-b990-b0f4572f6185 is now 1 (50.610027954s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:21:18.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6141" for this suite. 
• [SLOW TEST:52.679 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":2,"skipped":190,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:20:18.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 STEP: Creating pod startup-16104119-dd86-4b7c-a286-e6a54fea0b19 in namespace container-probe-3844 May 25 11:20:26.957: INFO: Started pod startup-16104119-dd86-4b7c-a286-e6a54fea0b19 in namespace 
container-probe-3844 STEP: checking the pod's current state and verifying that restartCount is present May 25 11:20:26.960: INFO: Initial restart count of pod startup-16104119-dd86-4b7c-a286-e6a54fea0b19 is 0 May 25 11:21:29.884: INFO: Restart count of pod container-probe-3844/startup-16104119-dd86-4b7c-a286-e6a54fea0b19 is now 1 (1m2.923301195s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:21:30.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3844" for this suite. • [SLOW TEST:71.668 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":2,"skipped":113,"failed":0} SSS ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":3,"skipped":153,"failed":0} [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:20:55.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] kubelet 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:274 [BeforeEach] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:295 [It] kubelet should be able to delete 10 pods per node in 1m0s. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 STEP: Creating a RC of 20 pods and wait until all pods of this RC are running STEP: creating replication controller cleanup20-b8667573-bad3-47ec-bbec-51980669960e in namespace kubelet-5557 I0525 11:20:55.993334 21 runners.go:190] Created replication controller with name: cleanup20-b8667573-bad3-47ec-bbec-51980669960e, namespace: kubelet-5557, replica count: 20 May 25 11:20:56.181: INFO: Missing info/stats for container "runtime" on node "v1.21-control-plane" May 25 11:20:56.215: INFO: Missing info/stats for container "runtime" on node "v1.21-worker" May 25 11:20:56.233: INFO: Missing info/stats for container "runtime" on node "v1.21-worker2" May 25 11:21:01.425: INFO: Missing info/stats for container "runtime" on node "v1.21-control-plane" May 25 11:21:01.549: INFO: Missing info/stats for container "runtime" on node "v1.21-worker" May 25 11:21:01.556: INFO: Missing info/stats for container "runtime" on node "v1.21-worker2" I0525 11:21:06.044541 21 runners.go:190] cleanup20-b8667573-bad3-47ec-bbec-51980669960e Pods: 20 out of 20 created, 15 running, 5 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 25 11:21:06.663: INFO: Missing info/stats for container "runtime" on node "v1.21-control-plane" May 25 11:21:06.861: INFO: Missing info/stats for container "runtime" on node "v1.21-worker" May 25 11:21:06.894: INFO: Missing info/stats for container "runtime" on node "v1.21-worker2" May 25 11:21:11.887: INFO: Missing info/stats for container "runtime" on node "v1.21-control-plane" May 25 11:21:12.167: INFO: Missing info/stats for 
container "runtime" on node "v1.21-worker" May 25 11:21:12.234: INFO: Missing info/stats for container "runtime" on node "v1.21-worker2" I0525 11:21:16.047132 21 runners.go:190] cleanup20-b8667573-bad3-47ec-bbec-51980669960e Pods: 20 out of 20 created, 20 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 25 11:21:17.048: INFO: Checking pods on node v1.21-worker2 via /runningpods endpoint May 25 11:21:17.048: INFO: Checking pods on node v1.21-worker via /runningpods endpoint May 25 11:21:17.077: INFO: [Resource usage on node "v1.21-control-plane" is not ready yet, Resource usage on node "v1.21-worker" is not ready yet, Resource usage on node "v1.21-worker2" is not ready yet] May 25 11:21:17.077: INFO: STEP: Deleting the RC STEP: deleting ReplicationController cleanup20-b8667573-bad3-47ec-bbec-51980669960e in namespace kubelet-5557, will wait for the garbage collector to delete the pods May 25 11:21:17.137: INFO: Deleting ReplicationController cleanup20-b8667573-bad3-47ec-bbec-51980669960e took: 5.488753ms May 25 11:21:17.165: INFO: Missing info/stats for container "runtime" on node "v1.21-control-plane" May 25 11:21:17.437: INFO: Missing info/stats for container "runtime" on node "v1.21-worker" May 25 11:21:17.503: INFO: Missing info/stats for container "runtime" on node "v1.21-worker2" May 25 11:21:17.738: INFO: Terminating ReplicationController cleanup20-b8667573-bad3-47ec-bbec-51980669960e pods took: 600.846326ms May 25 11:21:22.384: INFO: Missing info/stats for container "runtime" on node "v1.21-control-plane" May 25 11:21:22.654: INFO: Missing info/stats for container "runtime" on node "v1.21-worker" May 25 11:21:22.754: INFO: Missing info/stats for container "runtime" on node "v1.21-worker2" May 25 11:21:27.704: INFO: Missing info/stats for container "runtime" on node "v1.21-control-plane" May 25 11:21:28.003: INFO: Missing info/stats for container "runtime" on node "v1.21-worker" May 25 11:21:28.087: INFO: Missing 
info/stats for container "runtime" on node "v1.21-worker2" May 25 11:21:29.439: INFO: Checking pods on node v1.21-worker2 via /runningpods endpoint May 25 11:21:29.439: INFO: Checking pods on node v1.21-worker via /runningpods endpoint May 25 11:21:29.690: INFO: Deleting 20 pods on 2 nodes completed in 1.250979423s after the RC was deleted May 25 11:21:29.690: INFO:
CPU usage of containers on node "v1.21-control-plane":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.000  0.858  1.707  1.707  1.707  1.707
"runtime"   0.000  0.000  0.000  0.000  0.000  0.000  0.000
"kubelet"   0.000  0.000  0.000  0.000  0.000  0.000  0.000

CPU usage of containers on node "v1.21-worker":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.000  0.578  1.051  1.051  1.051  1.051
"runtime"   0.000  0.000  0.000  0.000  0.000  0.000  0.000
"kubelet"   0.000  0.000  0.000  0.000  0.000  0.000  0.000

CPU usage of containers on node "v1.21-worker2":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.000  0.636  0.815  0.815  0.815  0.815
"runtime"   0.000  0.000  0.000  0.000  0.000  0.000  0.000
"kubelet"   0.000  0.000  0.000  0.000  0.000  0.000  0.000
[AfterEach] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:326 STEP: removing the label kubelet_cleanup off the node v1.21-worker STEP: verifying the node doesn't have the label kubelet_cleanup STEP: removing the label kubelet_cleanup off the node v1.21-worker2 STEP: verifying the node doesn't have the label kubelet_cleanup [AfterEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:21:30.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-5557" for this suite. 
• [SLOW TEST:34.666 seconds] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279 kubelet should be able to delete 10 pods per node in 1m0s. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":4,"skipped":153,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:20:18.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W0525 11:20:18.978687 23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 25 11:20:18.978: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 25 11:20:18.982: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 STEP: Creating pod startup-70c1b788-dd79-43fa-a094-ddf589252b0f in namespace container-probe-3647 May 25 11:20:20.998: INFO: Started pod 
startup-70c1b788-dd79-43fa-a094-ddf589252b0f in namespace container-probe-3647 STEP: checking the pod's current state and verifying that restartCount is present May 25 11:20:21.006: INFO: Initial restart count of pod startup-70c1b788-dd79-43fa-a094-ddf589252b0f is 0 May 25 11:21:36.183: INFO: Restart count of pod container-probe-3647/startup-70c1b788-dd79-43fa-a094-ddf589252b0f is now 1 (1m15.176972267s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:21:36.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3647" for this suite. • [SLOW TEST:77.244 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":1,"skipped":202,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:21:30.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an image specified user ID 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151 May 25 11:21:30.984: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-5086" to be "Succeeded or Failed" May 25 11:21:31.289: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 304.588137ms May 25 11:21:33.481: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.497283332s May 25 11:21:35.486: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.501921593s May 25 11:21:37.490: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.505726202s May 25 11:21:37.490: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:21:37.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5086" for this suite. 
• [SLOW TEST:6.891 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":5,"skipped":164,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:21:36.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups May 25 11:21:36.248: INFO: Waiting up to 5m0s for pod "security-context-479b62c4-dc11-4a63-8ec2-5643e0db92f8" in namespace "security-context-9824" to be "Succeeded or Failed" May 25 11:21:36.251: INFO: Pod "security-context-479b62c4-dc11-4a63-8ec2-5643e0db92f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.310484ms May 25 11:21:38.255: INFO: Pod "security-context-479b62c4-dc11-4a63-8ec2-5643e0db92f8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006936679s
STEP: Saw pod success
May 25 11:21:38.255: INFO: Pod "security-context-479b62c4-dc11-4a63-8ec2-5643e0db92f8" satisfied condition "Succeeded or Failed"
May 25 11:21:38.259: INFO: Trying to get logs from node v1.21-worker pod security-context-479b62c4-dc11-4a63-8ec2-5643e0db92f8 container test-container:
STEP: delete the pod
May 25 11:21:38.272: INFO: Waiting for pod security-context-479b62c4-dc11-4a63-8ec2-5643e0db92f8 to disappear
May 25 11:21:38.276: INFO: Pod security-context-479b62c4-dc11-4a63-8ec2-5643e0db92f8 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:21:38.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-9824" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":2,"skipped":210,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] crictl
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:21:38.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crictl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] crictl
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:33
May 25 11:21:38.589: INFO: Only supported for providers [gce gke] (not skeleton)
[AfterEach] [sig-node] crictl
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:21:38.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crictl-853" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.042 seconds]
[sig-node] crictl
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should be able to run crictl on the node [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:40
  Only supported for providers [gce gke] (not skeleton)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:35
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:21:30.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446
[It] should not create extra sandbox if all containers are done
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450
STEP: creating the pod that should always exit 0
STEP: submitting the pod to kubernetes
May 25 11:21:31.489: INFO: Waiting up to 5m0s for pod "pod-always-succeed65a99aa7-4fd0-4ee8-9279-8797a1e10bfd" in namespace "pods-687" to be "Succeeded or Failed"
May 25 11:21:31.579: INFO: Pod "pod-always-succeed65a99aa7-4fd0-4ee8-9279-8797a1e10bfd": Phase="Pending", Reason="", readiness=false. Elapsed: 89.322165ms
May 25 11:21:33.683: INFO: Pod "pod-always-succeed65a99aa7-4fd0-4ee8-9279-8797a1e10bfd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193466351s
May 25 11:21:35.688: INFO: Pod "pod-always-succeed65a99aa7-4fd0-4ee8-9279-8797a1e10bfd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198475086s
May 25 11:21:37.692: INFO: Pod "pod-always-succeed65a99aa7-4fd0-4ee8-9279-8797a1e10bfd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.203008402s
STEP: Saw pod success
May 25 11:21:37.693: INFO: Pod "pod-always-succeed65a99aa7-4fd0-4ee8-9279-8797a1e10bfd" satisfied condition "Succeeded or Failed"
STEP: Getting events about the pod
STEP: Checking events about the pod
STEP: deleting the pod
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:21:39.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-687" for this suite.
• [SLOW TEST:8.988 seconds]
[sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:444
    should not create extra sandbox if all containers are done
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":3,"skipped":185,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:20:21.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43
[It] the kubelet should report node status infrequently
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112
STEP: wait until node is ready
May 25 11:20:21.172: INFO: Waiting up to 5m0s for node v1.21-worker condition Ready to be true
STEP: wait until there is node lease
STEP: verify NodeStatus report period is longer than lease duration
May 25 11:20:22.187: INFO: node status heartbeat is unchanged for 1.004630841s, waiting for 1m20s
May 25 11:20:23.187: INFO: node status heartbeat is unchanged for 2.004774774s, waiting for 1m20s
May 25 11:20:24.187: INFO: node status heartbeat is unchanged for 3.004870338s, waiting for 1m20s
May 25 11:20:25.187: INFO: node status heartbeat is unchanged for 4.0044836s, waiting for 1m20s
May 25 11:20:26.186: INFO: node status heartbeat is unchanged for 5.004120339s, waiting for 1m20s
May 25 11:20:27.187: INFO: node status heartbeat is unchanged for 6.004332638s, waiting for 1m20s
May 25 11:20:28.187: INFO: node status heartbeat is unchanged for 7.004367607s, waiting for 1m20s
May 25 11:20:29.186: INFO: node status heartbeat is unchanged for 8.004045726s, waiting for 1m20s
May 25 11:20:30.186: INFO: node status heartbeat is unchanged for 9.004278292s, waiting for 1m20s
May 25 11:20:31.291: INFO: node status heartbeat is unchanged for 10.108369969s, waiting for 1m20s
May 25 11:20:32.286: INFO: node status heartbeat is unchanged for 11.103987624s, waiting for 1m20s
May 25 11:20:33.282: INFO: node status heartbeat is unchanged for 12.099394845s, waiting for 1m20s
May 25 11:20:34.279: INFO: node status heartbeat is unchanged for 13.097034259s, waiting for 1m20s
May 25 11:20:35.186: INFO: node status heartbeat is unchanged for 14.004152372s, waiting for 1m20s
May 25 11:20:36.278: INFO: node status heartbeat is unchanged for 15.096136927s, waiting for 1m20s
May 25 11:20:37.187: INFO: node status heartbeat is unchanged for 16.004545137s, waiting for 1m20s
May 25 11:20:38.186: INFO: node status heartbeat is unchanged for 17.003804663s, waiting for 1m20s
May 25 11:20:39.186: INFO: node status heartbeat is unchanged for 18.004185736s, waiting for 1m20s
May 25 11:20:40.187: INFO: node status heartbeat is unchanged for 19.004434055s, waiting for 1m20s
May 25 11:20:41.187: INFO: node status heartbeat is unchanged for 20.004677928s, waiting for 1m20s
May 25 11:20:42.187: INFO: node status heartbeat is unchanged for 21.00433923s, waiting for 1m20s
May 25 11:20:43.187: INFO: node status heartbeat is unchanged for 22.00473075s, waiting for 1m20s
May 25 11:20:44.186: INFO: node status heartbeat is unchanged for 23.004286125s, waiting for 1m20s
May 25 11:20:45.187: INFO: node status heartbeat is unchanged for 24.005302432s, waiting for 1m20s
May 25 11:20:46.187: INFO: node status heartbeat is unchanged for 25.005024862s, waiting for 1m20s
May 25 11:20:47.187: INFO: node status heartbeat is unchanged for 26.004367457s, waiting for 1m20s
May 25 11:20:48.187: INFO: node status heartbeat is unchanged for 27.004607644s, waiting for 1m20s
May 25 11:20:49.187: INFO: node status heartbeat is unchanged for 28.00496996s, waiting for 1m20s
May 25 11:20:50.187: INFO: node status heartbeat is unchanged for 29.004418699s, waiting for 1m20s
May 25 11:20:51.188: INFO: node status heartbeat is unchanged for 30.005999541s, waiting for 1m20s
May 25 11:20:52.186: INFO: node status heartbeat is unchanged for 31.004282335s, waiting for 1m20s
May 25 11:20:53.188: INFO: node status heartbeat is unchanged for 32.005918569s, waiting for 1m20s
May 25 11:20:54.187: INFO: node status heartbeat is unchanged for 33.004544744s, waiting for 1m20s
May 25 11:20:55.186: INFO: node status heartbeat is unchanged for 34.004289272s, waiting for 1m20s
May 25 11:20:56.187: INFO: node status heartbeat is unchanged for 35.004724823s, waiting for 1m20s
May 25 11:20:57.188: INFO: node status heartbeat is unchanged for 36.005420333s, waiting for 1m20s
May 25 11:20:58.189: INFO: node status heartbeat is unchanged for 37.006774754s, waiting for 1m20s
May 25 11:20:59.186: INFO: node status heartbeat is unchanged for 38.004247085s, waiting for 1m20s
May 25 11:21:00.187: INFO: node status heartbeat is unchanged for 39.005051587s, waiting for 1m20s
May 25 11:21:01.187: INFO: node status heartbeat is unchanged for 40.004661662s, waiting for 1m20s
May 25 11:21:02.187: INFO: node status heartbeat is unchanged for 41.00447177s, waiting for 1m20s
May 25 11:21:03.187: INFO: node status heartbeat is unchanged for 42.005154665s, waiting for 1m20s
May 25 11:21:04.191: INFO: node status heartbeat is unchanged for 43.008372986s, waiting for 1m20s
May 25 11:21:05.188: INFO: node status heartbeat is unchanged for 44.005964429s, waiting for 1m20s
May 25 11:21:06.187: INFO: node status heartbeat is unchanged for 45.005083979s, waiting for 1m20s
May 25 11:21:07.188: INFO: node status heartbeat is unchanged for 46.006142557s, waiting for 1m20s
May 25 11:21:08.187: INFO: node status heartbeat is unchanged for 47.005260379s, waiting for 1m20s
May 25 11:21:09.187: INFO: node status heartbeat is unchanged for 48.004361704s, waiting for 1m20s
May 25 11:21:10.187: INFO: node status heartbeat is unchanged for 49.004981122s, waiting for 1m20s
May 25 11:21:11.188: INFO: node status heartbeat is unchanged for 50.005448717s, waiting for 1m20s
May 25 11:21:12.187: INFO: node status heartbeat is unchanged for 51.004581491s, waiting for 1m20s
May 25 11:21:13.186: INFO: node status heartbeat is unchanged for 52.004033023s, waiting for 1m20s
May 25 11:21:14.187: INFO: node status heartbeat is unchanged for 53.00486743s, waiting for 1m20s
May 25 11:21:15.187: INFO: node status heartbeat is unchanged for 54.004773256s, waiting for 1m20s
May 25 11:21:16.187: INFO: node status heartbeat is unchanged for 55.004621606s, waiting for 1m20s
May 25 11:21:17.186: INFO: node status heartbeat is unchanged for 56.004228562s, waiting for 1m20s
May 25 11:21:18.186: INFO: node status heartbeat is unchanged for 57.004070155s, waiting for 1m20s
May 25 11:21:19.186: INFO: node status heartbeat is unchanged for 58.00377342s, waiting for 1m20s
May 25 11:21:20.187: INFO: node status heartbeat is unchanged for 59.004761305s, waiting for 1m20s
May 25 11:21:21.187: INFO: node status heartbeat is unchanged for 1m0.00436022s, waiting for 1m20s
May 25 11:21:22.186: INFO: node status heartbeat is unchanged for 1m1.003982081s, waiting for 1m20s
May 25 11:21:23.187: INFO: node status heartbeat is unchanged for 1m2.00474736s, waiting for 1m20s
May 25 11:21:24.187: INFO: node status heartbeat is unchanged for 1m3.004694016s, waiting for 1m20s
May 25 11:21:25.187: INFO: node status heartbeat is unchanged for 1m4.004739223s, waiting for 1m20s
May 25 11:21:26.187: INFO: node status heartbeat is unchanged for 1m5.004881482s, waiting for 1m20s
May 25 11:21:27.381: INFO: node status heartbeat is unchanged for 1m6.198600699s, waiting for 1m20s
May 25 11:21:28.186: INFO: node status heartbeat is unchanged for 1m7.00373735s, waiting for 1m20s
May 25 11:21:29.379: INFO: node status heartbeat is unchanged for 1m8.196883335s, waiting for 1m20s
May 25 11:21:30.379: INFO: node status heartbeat is unchanged for 1m9.196570493s, waiting for 1m20s
May 25 11:21:31.289: INFO: node status heartbeat is unchanged for 1m10.10695956s, waiting for 1m20s
May 25 11:21:32.278: INFO: node status heartbeat is unchanged for 1m11.09628057s, waiting for 1m20s
May 25 11:21:33.481: INFO: node status heartbeat is unchanged for 1m12.298847221s, waiting for 1m20s
May 25 11:21:34.186: INFO: node status heartbeat is unchanged for 1m13.003744552s, waiting for 1m20s
May 25 11:21:35.187: INFO: node status heartbeat is unchanged for 1m14.004598554s, waiting for 1m20s
May 25 11:21:36.186: INFO: node status heartbeat is unchanged for 1m15.004223934s, waiting for 1m20s
May 25 11:21:37.187: INFO: node status heartbeat is unchanged for 1m16.004737795s, waiting for 1m20s
May 25 11:21:38.186: INFO: node status heartbeat is unchanged for 1m17.004237468s, waiting for 1m20s
May 25 11:21:39.186: INFO: node status heartbeat is unchanged for 1m18.004154215s, waiting for 1m20s
May 25 11:21:40.187: INFO: node status heartbeat is unchanged for 1m19.005185841s, waiting for 1m20s
May 25 11:21:41.187: INFO: node status heartbeat is unchanged for 1m20.004376588s, was waiting for at least 1m20s, success!
STEP: verify node is still in ready status even though node status report is infrequent
[AfterEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:21:41.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-8695" for this suite.
• [SLOW TEST:80.072 seconds]
[sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when the NodeLease feature is enabled
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:49
    the kubelet should report node status infrequently
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":2,"skipped":267,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:21:37.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477
STEP: Creating pod startup-override-40fff421-1ba2-453c-a83f-56868efc69fb in namespace container-probe-9150
May 25 11:21:39.577: INFO: Started pod startup-override-40fff421-1ba2-453c-a83f-56868efc69fb in namespace container-probe-9150
STEP: checking the pod's current state and verifying that restartCount is present
May 25 11:21:39.581: INFO: Initial restart count of pod startup-override-40fff421-1ba2-453c-a83f-56868efc69fb is 0
May 25 11:21:41.588: INFO: Restart count of pod container-probe-9150/startup-override-40fff421-1ba2-453c-a83f-56868efc69fb is now 1 (2.0068091s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:21:41.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9150" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":6,"skipped":181,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:21:41.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apparmor
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32
May 25 11:21:41.658: INFO: Only supported for node OS distro [gci ubuntu] (not debian)
[AfterEach] load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36
[AfterEach] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:21:41.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "apparmor-7881" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.042 seconds]
[sig-node] AppArmor
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31
    can disable an AppArmor profile, using unconfined [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47
    Only supported for node OS distro [gci ubuntu] (not debian)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:20:39.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should be restarted with a failing exec liveness probe that took longer than the timeout
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254
STEP: Creating pod busybox-736af477-d307-4030-a8e5-8d38842d1e63 in namespace container-probe-5660
May 25 11:20:43.277: INFO: Started pod busybox-736af477-d307-4030-a8e5-8d38842d1e63 in namespace container-probe-5660
STEP: checking the pod's current state and verifying that restartCount is present
May 25 11:20:43.280: INFO: Initial restart count of pod busybox-736af477-d307-4030-a8e5-8d38842d1e63 is 0
May 25 11:21:42.196: INFO: Restart count of pod container-probe-5660/busybox-736af477-d307-4030-a8e5-8d38842d1e63 is now 1 (58.916940953s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:21:42.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5660" for this suite.
• [SLOW TEST:62.990 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a failing exec liveness probe that took longer than the timeout
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":6,"skipped":620,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:21:41.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the pod [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
May 25 11:21:41.267: INFO: Waiting up to 5m0s for pod "security-context-2c8ed109-8b1a-480a-a4ae-c91e657e87e4" in namespace "security-context-6916" to be "Succeeded or Failed"
May 25 11:21:41.270: INFO: Pod "security-context-2c8ed109-8b1a-480a-a4ae-c91e657e87e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.528142ms
May 25 11:21:43.276: INFO: Pod "security-context-2c8ed109-8b1a-480a-a4ae-c91e657e87e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009112236s
STEP: Saw pod success
May 25 11:21:43.276: INFO: Pod "security-context-2c8ed109-8b1a-480a-a4ae-c91e657e87e4" satisfied condition "Succeeded or Failed"
May 25 11:21:43.280: INFO: Trying to get logs from node v1.21-worker2 pod security-context-2c8ed109-8b1a-480a-a4ae-c91e657e87e4 container test-container:
STEP: delete the pod
May 25 11:21:43.295: INFO: Waiting for pod security-context-2c8ed109-8b1a-480a-a4ae-c91e657e87e4 to disappear
May 25 11:21:43.298: INFO: Pod security-context-2c8ed109-8b1a-480a-a4ae-c91e657e87e4 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:21:43.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-6916" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":3,"skipped":281,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:21:42.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should not run with an explicit root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:21:44.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8046" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":7,"skipped":931,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:21:43.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
May 25 11:21:43.571: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-93241e53-8ed1-46a1-ad86-aff3fc445d58" in namespace "security-context-test-1595" to be "Succeeded or Failed"
May 25 11:21:43.574: INFO: Pod "alpine-nnp-nil-93241e53-8ed1-46a1-ad86-aff3fc445d58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.530294ms
May 25 11:21:45.578: INFO: Pod "alpine-nnp-nil-93241e53-8ed1-46a1-ad86-aff3fc445d58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006897101s
May 25 11:21:45.578: INFO: Pod "alpine-nnp-nil-93241e53-8ed1-46a1-ad86-aff3fc445d58" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:21:45.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1595" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":408,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:21:44.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to pull from private registry with secret [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393
STEP: create image pull secret
STEP: create the container
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:21:47.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6256" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":8,"skipped":962,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:21:45.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be able to pull image from invalid registry [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377
STEP: create the container
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:21:48.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4079" for this suite.
•S
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":5,"skipped":613,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:21:41.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Delete Grace Period
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:53
[It] should be submitted and removed
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
May 25 11:21:49.078: INFO: start=2021-05-25 11:21:44.059575662 +0000 UTC m=+87.596695569, now=2021-05-25 11:21:49.078531723 +0000 UTC m=+92.615651621, kubelet pod: {"metadata":{"name":"pod-submit-remove-b2d0c365-8f80-4b85-a3d1-f30d8356d4b3","namespace":"pods-7949","uid":"26f64680-bbbd-4782-ae0c-29f28cebce6c","resourceVersion":"577686","creationTimestamp":"2021-05-25T11:21:42Z","deletionTimestamp":"2021-05-25T11:22:14Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"31750651"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.2.233\"\n ],\n \"mac\": \"e6:d7:98:23:d4:67\",\n \"default\": true,\n \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.2.233\"\n ],\n \"mac\": \"e6:d7:98:23:d4:67\",\n \"default\": true,\n \"dns\": {}\n}]","kubernetes.io/config.seen":"2021-05-25T11:21:42.047712882Z","kubernetes.io/config.source":"api"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-05-25T11:21:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-bfq2b","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-bfq2b","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"v1.21-worker2","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-05-25T11:21:42Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-05-25T11:21:46Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-05-25T11:21:46Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-05-25T11:21:42Z"}],"hostIP":"172.18.0.2","podIP":"10.244.2.233","podIPs":[{"ip":"10.244.2.233"}],"startTime":"2021-05-25T11:21:42Z","containerStatuses":[{"name":"agnhost-container","state":{"terminated":{"exitCode":2,"reason":"Error","startedAt":"2021-05-25T11:21:42Z","finishedAt":"2021-05-25T11:21:45Z","containerID":"containerd://a1a7df776b1560170645311443bb75530c2617b8e3f67e76652a27e5bc80e9e1"}},"lastState":{},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"containerd://a1a7df776b1560170645311443bb75530c2617b8e3f67e76652a27e5bc80e9e1","started":false}],"qosClass":"BestEffort"}}
May 25 11:21:54.076: INFO: start=2021-05-25 11:21:44.059575662 +0000 UTC m=+87.596695569, now=2021-05-25 11:21:54.076047486 +0000 UTC m=+97.613167390, kubelet pod: {"metadata":{"name":"pod-submit-remove-b2d0c365-8f80-4b85-a3d1-f30d8356d4b3","namespace":"pods-7949","uid":"26f64680-bbbd-4782-ae0c-29f28cebce6c","resourceVersion":"577686","creationTimestamp":"2021-05-25T11:21:42Z","deletionTimestamp":"2021-05-25T11:22:14Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"31750651"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.2.233\"\n ],\n \"mac\": \"e6:d7:98:23:d4:67\",\n \"default\": true,\n \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.2.233\"\n ],\n \"mac\": \"e6:d7:98:23:d4:67\",\n \"default\": true,\n \"dns\": {}\n}]","kubernetes.io/config.seen":"2021-05-25T11:21:42.047712882Z","kubernetes.io/config.source":"api"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-05-25T11:21:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-bfq2b","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-bfq2b","readOnly":true,"mountPath":"/var/run/se
crets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"v1.21-worker2","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-05-25T11:21:42Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-05-25T11:21:46Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-05-25T11:21:46Z","reason":"ContainersNotReady","message":"containers with unready status: 
[agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-05-25T11:21:42Z"}],"hostIP":"172.18.0.2","podIP":"10.244.2.233","podIPs":[{"ip":"10.244.2.233"}],"startTime":"2021-05-25T11:21:42Z","containerStatuses":[{"name":"agnhost-container","state":{"terminated":{"exitCode":2,"reason":"Error","startedAt":"2021-05-25T11:21:42Z","finishedAt":"2021-05-25T11:21:45Z","containerID":"containerd://a1a7df776b1560170645311443bb75530c2617b8e3f67e76652a27e5bc80e9e1"}},"lastState":{},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"containerd://a1a7df776b1560170645311443bb75530c2617b8e3f67e76652a27e5bc80e9e1","started":false}],"qosClass":"BestEffort"}} May 25 11:21:59.074: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:21:59.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7949" for this suite. 
• [SLOW TEST:17.099 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:51 should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":-1,"completed":7,"skipped":368,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:21:59.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ssh STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:36 May 25 11:21:59.140: INFO: Only supported for providers [gce gke aws local] (not skeleton) [AfterEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:21:59.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ssh-4413" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.041 seconds] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should SSH to all nodes and run commands [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45 Only supported for providers [gce gke aws local] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:38 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:21:38.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be ready immediately after startupProbe succeeds /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400 May 25 11:22:00.741: INFO: The status of Pod startup-60674628-8fee-4e47-9eab-e93d2b13f9c9 is Running (Ready = true) May 25 11:22:00.745: INFO: Container started at 2021-05-25 11:22:00.738816886 +0000 UTC m=+104.276482068, pod became ready at 2021-05-25 11:22:00.741952493 +0000 UTC m=+104.279617674, 3.135606ms after startupProbe succeeded [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:22:00.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "container-probe-3886" for this suite. • [SLOW TEST:22.067 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be ready immediately after startupProbe succeeds /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be ready immediately after startupProbe succeeds","total":-1,"completed":3,"skipped":408,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:22:01.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-problem-detector STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:52 May 25 11:22:01.583: INFO: No SSH Key for provider skeleton: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:22:01.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"node-problem-detector-851" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.426 seconds] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should run without error [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60 No SSH Key for provider skeleton: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:53 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:21:39.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: waiting for pod running STEP: deleting the pod gracefully STEP: verifying the pod is running while in the graceful period termination May 25 11:22:01.903: INFO: pod is running [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:22:01.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "prestop-3242" for this suite. • [SLOW TEST:22.282 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":4,"skipped":227,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:21:59.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 May 25 11:21:59.414: INFO: Found ClusterRoles; assuming RBAC is enabled. 
[It] should create a pod that prints his name and namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134 STEP: creating the pod May 25 11:21:59.522: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=examples-9853 create -f -' May 25 11:21:59.850: INFO: stderr: "" May 25 11:21:59.850: INFO: stdout: "pod/dapi-test-pod created\n" STEP: checking if name and namespace were passed correctly May 25 11:22:01.881: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=examples-9853 logs dapi-test-pod test-container' May 25 11:22:02.190: INFO: stderr: "" May 25 11:22:02.276: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.96.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-9853\nMY_POD_IP=10.244.1.241\nKUBERNETES_PORT_443_TCP_ADDR=10.96.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=172.18.0.4\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443\nKUBERNETES_SERVICE_HOST=10.96.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" May 25 11:22:02.276: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=examples-9853 logs dapi-test-pod test-container' May 25 11:22:02.414: INFO: stderr: "" May 25 11:22:02.414: INFO: stdout: 
"KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.96.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-9853\nMY_POD_IP=10.244.1.241\nKUBERNETES_PORT_443_TCP_ADDR=10.96.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=172.18.0.4\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443\nKUBERNETES_SERVICE_HOST=10.96.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:22:02.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-9853" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace","total":-1,"completed":8,"skipped":495,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:22:02.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 May 25 11:22:02.409: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-e8cc91cb-e746-41bd-b3d5-686b4bf69da4" in namespace "security-context-test-4357" to be "Succeeded or Failed" 
May 25 11:22:02.412: INFO: Pod "busybox-privileged-true-e8cc91cb-e746-41bd-b3d5-686b4bf69da4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.273782ms May 25 11:22:04.415: INFO: Pod "busybox-privileged-true-e8cc91cb-e746-41bd-b3d5-686b4bf69da4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006896113s May 25 11:22:04.416: INFO: Pod "busybox-privileged-true-e8cc91cb-e746-41bd-b3d5-686b4bf69da4" satisfied condition "Succeeded or Failed" May 25 11:22:04.421: INFO: Got logs for pod "busybox-privileged-true-e8cc91cb-e746-41bd-b3d5-686b4bf69da4": "" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:22:04.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4357" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":5,"skipped":327,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:22:02.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129 May 25 11:22:02.459: INFO: Waiting up to 5m0s for pod 
"explicit-nonroot-uid" in namespace "security-context-test-3893" to be "Succeeded or Failed" May 25 11:22:02.462: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 3.11512ms May 25 11:22:04.467: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007441101s May 25 11:22:04.467: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:22:04.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3893" for this suite. •S ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":9,"skipped":497,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ May 25 11:22:04.510: INFO: Running AfterSuite actions on all nodes S ------------------------------ May 25 11:22:04.511: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:21:48.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274 STEP: Creating pod 
liveness-728c0696-df06-4c5e-b39e-5f97f3b6de6f in namespace container-probe-3238 May 25 11:21:54.546: INFO: Started pod liveness-728c0696-df06-4c5e-b39e-5f97f3b6de6f in namespace container-probe-3238 STEP: checking the pod's current state and verifying that restartCount is present May 25 11:21:54.549: INFO: Initial restart count of pod liveness-728c0696-df06-4c5e-b39e-5f97f3b6de6f is 0 May 25 11:22:10.586: INFO: Restart count of pod container-probe-3238/liveness-728c0696-df06-4c5e-b39e-5f97f3b6de6f is now 1 (16.036759288s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:22:10.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3238" for this suite. • [SLOW TEST:22.111 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":9,"skipped":1240,"failed":0} May 25 11:22:10.607: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:21:18.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 May 25 11:21:18.834: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 May 25 11:21:18.941: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=examples-9676 create -f -' May 25 11:21:19.233: INFO: stderr: "" May 25 11:21:19.233: INFO: stdout: "pod/liveness-exec created\n" May 25 11:21:19.233: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=examples-9676 create -f -' May 25 11:21:19.539: INFO: stderr: "" May 25 11:21:19.539: INFO: stdout: "pod/liveness-http created\n" STEP: Check restarts May 25 11:21:27.878: INFO: Pod: liveness-exec, restart count:0 May 25 11:21:27.879: INFO: Pod: liveness-http, restart count:0 May 25 11:21:29.883: INFO: Pod: liveness-exec, restart count:0 May 25 11:21:29.883: INFO: Pod: liveness-http, restart count:0 May 25 11:21:31.888: INFO: Pod: liveness-exec, restart count:0 May 25 11:21:31.888: INFO: Pod: liveness-http, restart count:0 May 25 11:21:34.179: INFO: Pod: liveness-http, restart count:0 May 25 11:21:34.179: INFO: Pod: liveness-exec, restart count:0 May 25 11:21:36.184: INFO: Pod: liveness-http, restart count:0 May 25 11:21:36.184: INFO: Pod: liveness-exec, restart count:0 May 25 11:21:38.188: INFO: Pod: liveness-exec, restart count:0 May 25 11:21:38.188: INFO: Pod: liveness-http, restart count:0 May 25 11:21:40.191: INFO: Pod: liveness-http, restart count:0 May 25 11:21:40.191: INFO: Pod: liveness-exec, restart count:0 May 25 11:21:42.196: INFO: Pod: liveness-http, restart count:0 May 25 11:21:42.196: INFO: Pod: liveness-exec, restart count:0 May 25 11:21:44.201: INFO: Pod: liveness-http, restart count:0 May 25 11:21:44.201: INFO: Pod: 
liveness-exec, restart count:0 May 25 11:21:46.206: INFO: Pod: liveness-exec, restart count:0 May 25 11:21:46.206: INFO: Pod: liveness-http, restart count:0 May 25 11:21:48.211: INFO: Pod: liveness-exec, restart count:0 May 25 11:21:48.211: INFO: Pod: liveness-http, restart count:0 May 25 11:21:50.216: INFO: Pod: liveness-http, restart count:0 May 25 11:21:50.216: INFO: Pod: liveness-exec, restart count:0 May 25 11:21:52.220: INFO: Pod: liveness-exec, restart count:0 May 25 11:21:52.221: INFO: Pod: liveness-http, restart count:0 May 25 11:21:54.225: INFO: Pod: liveness-http, restart count:0 May 25 11:21:54.225: INFO: Pod: liveness-exec, restart count:0 May 25 11:21:56.230: INFO: Pod: liveness-exec, restart count:0 May 25 11:21:56.230: INFO: Pod: liveness-http, restart count:0 May 25 11:21:58.235: INFO: Pod: liveness-exec, restart count:0 May 25 11:21:58.235: INFO: Pod: liveness-http, restart count:0 May 25 11:22:00.240: INFO: Pod: liveness-exec, restart count:0 May 25 11:22:00.240: INFO: Pod: liveness-http, restart count:0 May 25 11:22:02.379: INFO: Pod: liveness-exec, restart count:0 May 25 11:22:02.379: INFO: Pod: liveness-http, restart count:1 May 25 11:22:02.380: INFO: Saw liveness-http restart, succeeded... 
May 25 11:22:04.383: INFO: Pod: liveness-exec, restart count:0 May 25 11:22:06.387: INFO: Pod: liveness-exec, restart count:0 May 25 11:22:08.393: INFO: Pod: liveness-exec, restart count:0 May 25 11:22:10.397: INFO: Pod: liveness-exec, restart count:0 May 25 11:22:12.402: INFO: Pod: liveness-exec, restart count:0 May 25 11:22:14.407: INFO: Pod: liveness-exec, restart count:0 May 25 11:22:16.412: INFO: Pod: liveness-exec, restart count:0 May 25 11:22:18.417: INFO: Pod: liveness-exec, restart count:0 May 25 11:22:20.422: INFO: Pod: liveness-exec, restart count:0 May 25 11:22:22.428: INFO: Pod: liveness-exec, restart count:0 May 25 11:22:24.433: INFO: Pod: liveness-exec, restart count:0 May 25 11:22:26.585: INFO: Pod: liveness-exec, restart count:0 May 25 11:22:28.589: INFO: Pod: liveness-exec, restart count:0 May 25 11:22:30.594: INFO: Pod: liveness-exec, restart count:1 May 25 11:22:30.594: INFO: Saw liveness-exec restart, succeeded... [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:22:30.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-9676" for this suite. 
• [SLOW TEST:71.835 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Liveness /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:66 liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:22:02.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237 STEP: Creating pod busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a in namespace container-probe-8451 May 25 11:22:04.414: INFO: Started pod busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a in namespace container-probe-8451 May 25 11:22:04.414: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (1.103µs elapsed) May 25 11:22:06.415: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (2.001115052s elapsed) May 25 11:22:08.416: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (4.002615236s elapsed) May 25 11:22:10.417: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (6.003307762s elapsed) May 25 
11:22:12.417: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (8.003569334s elapsed) May 25 11:22:14.417: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (10.003873067s elapsed) May 25 11:22:16.419: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (12.005103085s elapsed) May 25 11:22:18.420: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (14.006735628s elapsed) May 25 11:22:20.421: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (16.007072737s elapsed) May 25 11:22:22.421: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (18.007804964s elapsed) May 25 11:22:24.422: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (20.008427776s elapsed) May 25 11:22:26.423: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (22.009704488s elapsed) May 25 11:22:28.424: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (24.010482167s elapsed) May 25 11:22:30.425: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (26.011719529s elapsed) May 25 11:22:32.426: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (28.012201194s elapsed) May 25 11:22:34.426: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (30.01244753s elapsed) May 25 11:22:36.427: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (32.013650697s elapsed) May 25 11:22:38.428: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (34.01452183s elapsed) May 25 11:22:40.429: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (36.015744698s elapsed) 
May 25 11:22:42.430: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (38.016006076s elapsed) May 25 11:22:44.430: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (40.016311315s elapsed) May 25 11:22:46.431: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (42.01750134s elapsed) May 25 11:22:48.432: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (44.018169549s elapsed) May 25 11:22:50.433: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (46.019211597s elapsed) May 25 11:22:52.433: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (48.019486666s elapsed) May 25 11:22:54.433: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (50.019740516s elapsed) May 25 11:22:56.436: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (52.021923046s elapsed) May 25 11:22:58.436: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (54.022836978s elapsed) May 25 11:23:00.438: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (56.024070287s elapsed) May 25 11:23:02.438: INFO: pod container-probe-8451/busybox-5a6e3f59-bec8-493f-9186-3fda3cf68e1a is not ready (58.024344855s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:23:05.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8451" for this suite. 
• [SLOW TEST:64.347 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237
------------------------------
{"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":4,"skipped":925,"failed":0}
May 25 11:23:06.578: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:21:48.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Pod Container Status
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:202
[It] should never report success for a pending container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206
STEP: creating pods that should always exit 1 and terminating the pod after a random delay
May 25 11:21:51.642: INFO: watch delete seen for pod-submit-status-2-0
May 25 11:21:51.642: INFO: Pod pod-submit-status-2-0 on node v1.21-worker timings total=3.545731403s t=1.187s run=1s execute=0s
May 25 11:21:52.441: INFO: watch delete seen for pod-submit-status-0-0
May 25 11:21:52.441: INFO: Pod pod-submit-status-0-0 on node v1.21-worker timings total=4.345458456s t=1.988s run=1s execute=0s
May 25 11:21:54.842: INFO: watch delete seen for pod-submit-status-1-0
May 25 11:21:54.842: INFO: Pod pod-submit-status-1-0 on node v1.21-worker timings total=6.745702874s t=222ms run=0s execute=0s
May 25 11:22:05.048: INFO: watch delete seen for pod-submit-status-0-1
May 25 11:22:05.048: INFO: Pod pod-submit-status-0-1 on node v1.21-worker timings total=12.606953122s t=184ms run=0s execute=0s
May 25 11:22:05.058: INFO: watch delete seen for pod-submit-status-2-1
May 25 11:22:05.058: INFO: Pod pod-submit-status-2-1 on node v1.21-worker timings total=13.416261699s t=1.347s run=1s execute=0s
May 25 11:22:05.069: INFO: watch delete seen for pod-submit-status-1-1
May 25 11:22:05.069: INFO: Pod pod-submit-status-1-1 on node v1.21-worker timings total=10.227524441s t=1.493s run=1s execute=0s
May 25 11:22:15.452: INFO: watch delete seen for pod-submit-status-0-2
May 25 11:22:15.452: INFO: Pod pod-submit-status-0-2 on node v1.21-worker2 timings total=10.403425729s t=1.094s run=1s execute=0s
May 25 11:22:15.463: INFO: watch delete seen for pod-submit-status-1-2
May 25 11:22:15.463: INFO: Pod pod-submit-status-1-2 on node v1.21-worker2 timings total=10.393763927s t=1.206s run=1s execute=0s
May 25 11:22:15.475: INFO: watch delete seen for pod-submit-status-2-2
May 25 11:22:15.475: INFO: Pod pod-submit-status-2-2 on node v1.21-worker2 timings total=10.417031722s t=1.086s run=1s execute=0s
May 25 11:22:25.052: INFO: watch delete seen for pod-submit-status-1-3
May 25 11:22:25.052: INFO: Pod pod-submit-status-1-3 on node v1.21-worker timings total=9.588760874s t=1.254s run=1s execute=0s
May 25 11:22:25.063: INFO: watch delete seen for pod-submit-status-2-3
May 25 11:22:25.063: INFO: Pod pod-submit-status-2-3 on node v1.21-worker timings total=9.587956839s t=116ms run=0s execute=0s
May 25 11:22:25.119: INFO: watch delete seen for pod-submit-status-0-3
May 25 11:22:25.119: INFO: Pod pod-submit-status-0-3 on node v1.21-worker timings total=9.666908752s t=259ms run=0s execute=0s
May 25 11:22:35.046: INFO: watch delete seen for pod-submit-status-0-4
May 25 11:22:35.046: INFO: Pod pod-submit-status-0-4 on node v1.21-worker timings total=9.926900191s t=573ms run=0s execute=0s
May 25 11:22:35.057: INFO: watch delete seen for pod-submit-status-1-4
May 25 11:22:35.057: INFO: Pod pod-submit-status-1-4 on node v1.21-worker timings total=10.005474576s t=1.422s run=1s execute=0s
May 25 11:22:35.104: INFO: watch delete seen for pod-submit-status-2-4
May 25 11:22:35.104: INFO: Pod pod-submit-status-2-4 on node v1.21-worker timings total=10.040737153s t=1.696s run=1s execute=0s
May 25 11:22:45.049: INFO: watch delete seen for pod-submit-status-1-5
May 25 11:22:45.049: INFO: Pod pod-submit-status-1-5 on node v1.21-worker timings total=9.99189069s t=126ms run=0s execute=0s
May 25 11:22:45.061: INFO: watch delete seen for pod-submit-status-0-5
May 25 11:22:45.061: INFO: Pod pod-submit-status-0-5 on node v1.21-worker timings total=10.015002822s t=1.216s run=1s execute=0s
May 25 11:22:45.120: INFO: watch delete seen for pod-submit-status-2-5
May 25 11:22:45.120: INFO: Pod pod-submit-status-2-5 on node v1.21-worker timings total=10.016204583s t=1.676s run=1s execute=0s
May 25 11:22:48.496: INFO: watch delete seen for pod-submit-status-0-6
May 25 11:22:48.497: INFO: Pod pod-submit-status-0-6 on node v1.21-worker2 timings total=3.435462046s t=383ms run=0s execute=0s
May 25 11:22:55.050: INFO: watch delete seen for pod-submit-status-1-6
May 25 11:22:55.050: INFO: Pod pod-submit-status-1-6 on node v1.21-worker timings total=10.000552294s t=1.975s run=1s execute=0s
May 25 11:22:55.060: INFO: watch delete seen for pod-submit-status-2-6
May 25 11:22:55.061: INFO: Pod pod-submit-status-2-6 on node v1.21-worker timings total=9.940304296s t=481ms run=0s execute=0s
May 25 11:22:55.448: INFO: watch delete seen for pod-submit-status-0-7
May 25 11:22:55.448: INFO: Pod pod-submit-status-0-7 on node v1.21-worker2 timings total=6.950968607s t=1.831s run=1s execute=0s
May 25 11:22:57.031: INFO: watch delete seen for pod-submit-status-0-8
May 25 11:22:57.031: INFO: Pod pod-submit-status-0-8 on node v1.21-worker2 timings total=1.583774039s t=1.564s run=1s execute=0s
May 25 11:23:06.382: INFO: watch delete seen for pod-submit-status-1-7
May 25 11:23:06.382: INFO: Pod pod-submit-status-1-7 on node v1.21-worker timings total=11.331977056s t=1.663s run=0s execute=0s
May 25 11:23:06.884: INFO: watch delete seen for pod-submit-status-2-7
May 25 11:23:06.884: INFO: Pod pod-submit-status-2-7 on node v1.21-worker2 timings total=11.823708899s t=173ms run=0s execute=0s
May 25 11:23:15.051: INFO: watch delete seen for pod-submit-status-2-8
May 25 11:23:15.051: INFO: Pod pod-submit-status-2-8 on node v1.21-worker timings total=8.166665348s t=1.136s run=0s execute=0s
May 25 11:23:15.448: INFO: watch delete seen for pod-submit-status-0-9
May 25 11:23:15.448: INFO: Pod pod-submit-status-0-9 on node v1.21-worker2 timings total=18.416783844s t=1.312s run=0s execute=0s
May 25 11:23:16.919: INFO: watch delete seen for pod-submit-status-0-10
May 25 11:23:16.919: INFO: Pod pod-submit-status-0-10 on node v1.21-worker timings total=1.470943186s t=1.457s run=1s execute=0s
May 25 11:23:17.052: INFO: watch delete seen for pod-submit-status-2-9
May 25 11:23:17.052: INFO: Pod pod-submit-status-2-9 on node v1.21-worker timings total=2.000921569s t=1.985s run=0s execute=0s
May 25 11:23:18.161: INFO: watch delete seen for pod-submit-status-0-11
May 25 11:23:18.161: INFO: Pod pod-submit-status-0-11 on node v1.21-worker timings total=1.241772496s t=1.226s run=1s execute=0s
May 25 11:23:20.102: INFO: watch delete seen for pod-submit-status-0-12
May 25 11:23:20.102: INFO: Pod pod-submit-status-0-12 on node v1.21-worker2 timings total=1.941087746s t=1.924s run=1s execute=0s
May 25 11:23:21.972: INFO: watch delete seen for pod-submit-status-0-13
May 25 11:23:21.972: INFO: Pod pod-submit-status-0-13 on node v1.21-worker2 timings total=1.869195982s t=1.853s run=1s execute=0s
May 25 11:23:25.047: INFO: watch delete seen for pod-submit-status-1-8
May 25 11:23:25.047: INFO: Pod pod-submit-status-1-8 on node v1.21-worker timings total=18.664599695s t=477ms run=0s execute=0s
May 25 11:23:25.449: INFO: watch delete seen for pod-submit-status-2-10
May 25 11:23:25.449: INFO: Pod pod-submit-status-2-10 on node v1.21-worker2 timings total=8.396860589s t=918ms run=0s execute=0s
May 25 11:23:25.642: INFO: watch delete seen for pod-submit-status-2-11
May 25 11:23:25.642: INFO: Pod pod-submit-status-2-11 on node v1.21-worker timings total=192.888962ms t=8ms run=0s execute=0s
May 25 11:23:27.490: INFO: watch delete seen for pod-submit-status-2-12
May 25 11:23:27.490: INFO: Pod pod-submit-status-2-12 on node v1.21-worker2 timings total=1.847897138s t=1.831s run=1s execute=0s
May 25 11:23:29.066: INFO: watch delete seen for pod-submit-status-2-13
May 25 11:23:29.066: INFO: Pod pod-submit-status-2-13 on node v1.21-worker timings total=1.576007107s t=1.56s run=1s execute=0s
May 25 11:23:30.943: INFO: watch delete seen for pod-submit-status-2-14
May 25 11:23:30.943: INFO: Pod pod-submit-status-2-14 on node v1.21-worker2 timings total=1.877194712s t=1.862s run=0s execute=0s
May 25 11:23:35.047: INFO: watch delete seen for pod-submit-status-1-9
May 25 11:23:35.047: INFO: Pod pod-submit-status-1-9 on node v1.21-worker timings total=10.000397961s t=168ms run=0s execute=0s
May 25 11:23:35.100: INFO: watch delete seen for pod-submit-status-1-10
May 25 11:23:35.100: INFO: Pod pod-submit-status-1-10 on node v1.21-worker timings total=52.541114ms t=2ms run=0s execute=0s
May 25 11:23:35.448: INFO: watch delete seen for pod-submit-status-0-14
May 25 11:23:35.448: INFO: Pod pod-submit-status-0-14 on node v1.21-worker2 timings total=13.475807169s t=643ms run=1s execute=0s
May 25 11:23:45.049: INFO: watch delete seen for pod-submit-status-1-11
May 25 11:23:45.049: INFO: Pod pod-submit-status-1-11 on node v1.21-worker timings total=9.948757275s t=139ms run=0s execute=0s
May 25 11:23:46.885: INFO: watch delete seen for pod-submit-status-1-12
May 25 11:23:46.885: INFO: Pod pod-submit-status-1-12 on node v1.21-worker2 timings total=1.836372351s t=1.815s run=0s execute=0s
May 25 11:23:55.049: INFO: watch delete seen for pod-submit-status-1-13
May 25 11:23:55.049: INFO: Pod pod-submit-status-1-13 on node v1.21-worker timings total=8.163872094s t=209ms run=0s execute=0s
May 25 11:24:05.450: INFO: watch delete seen for pod-submit-status-1-14
May 25 11:24:05.450: INFO: Pod pod-submit-status-1-14 on node v1.21-worker2 timings total=10.400594612s t=641ms run=0s execute=0s
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:24:05.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1914" for this suite.
• [SLOW TEST:137.392 seconds]
[sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Pod Container Status
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:200
    should never report success for a pending container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report success for a pending container","total":-1,"completed":6,"skipped":619,"failed":0}
May 25 11:24:05.461: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:20:27.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should *not* be restarted by liveness probe because startup probe delays it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342
STEP: Creating pod startup-a7e6084e-b540-4265-b3ee-00304493a83d in namespace container-probe-7289
May 25 11:20:33.281: INFO: Started pod startup-a7e6084e-b540-4265-b3ee-00304493a83d in namespace container-probe-7289
STEP: checking the pod's current state and verifying that restartCount is present
May 25 11:20:33.483: INFO: Initial restart count of pod startup-a7e6084e-b540-4265-b3ee-00304493a83d is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:24:34.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7289" for this suite.
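The test that just finished (restart count stayed at 0 over roughly four minutes) relies on the rule that liveness probes do not run until the startup probe has succeeded. A small sketch of that gating rule, assuming a hypothetical `ProbeGate` model rather than kubelet's actual probe workers:

```python
class ProbeGate:
    """Toy model: liveness results are ignored until startup succeeds."""

    def __init__(self):
        self.started = False
        self.restarts = 0

    def tick(self, startup_ok, liveness_ok):
        if not self.started:
            # Startup phase: only the startup probe matters here,
            # so a "failing" liveness probe cannot kill the container.
            if startup_ok:
                self.started = True
            return
        if not liveness_ok:
            self.restarts += 1  # liveness only counts after startup

gate = ProbeGate()
# Container takes 3 ticks to start; liveness would "fail" the whole time.
for startup_ok in (False, False, True):
    gate.tick(startup_ok, liveness_ok=False)
print(gate.restarts)  # 0 -- startup probe delayed liveness, no restart
```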
• [SLOW TEST:247.966 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted by liveness probe because startup probe delays it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":-1,"completed":2,"skipped":162,"failed":0}
May 25 11:24:34.987: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:20:54.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should *not* be restarted with a non-local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289
STEP: Creating pod liveness-0a1bfe60-bd9f-4507-b380-a1dfa30ca233 in namespace container-probe-4378
May 25 11:21:02.373: INFO: Started pod liveness-0a1bfe60-bd9f-4507-b380-a1dfa30ca233 in namespace container-probe-4378
STEP: checking the pod's current state and verifying that restartCount is present
May 25 11:21:02.377: INFO: Initial restart count of pod liveness-0a1bfe60-bd9f-4507-b380-a1dfa30ca233 is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:25:02.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4378" for this suite.
• [SLOW TEST:248.443 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a non-local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":6,"skipped":472,"failed":0}
May 25 11:25:02.776: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:20:29.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681
May 25 11:20:29.224: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true)
May 25 11:20:31.290: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true)
May 25 11:20:33.281: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true)
May 25 11:20:35.229: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true)
May 25 11:20:37.228: INFO: The status of Pod pod-back-off-image is Running (Ready = true)
STEP: getting restart delay-0
May 25 11:22:25.685: INFO: getRestartDelay: restartCount = 4, finishedAt=2021-05-25 11:21:37 +0000 UTC restartedAt=2021-05-25 11:22:24 +0000 UTC (47s)
STEP: getting restart delay-1
May 25 11:24:03.432: INFO: getRestartDelay: restartCount = 5, finishedAt=2021-05-25 11:22:29 +0000 UTC restartedAt=2021-05-25 11:24:02 +0000 UTC (1m33s)
STEP: getting restart delay-2
May 25 11:26:52.739: INFO: getRestartDelay: restartCount = 6, finishedAt=2021-05-25 11:24:07 +0000 UTC restartedAt=2021-05-25 11:26:52 +0000 UTC (2m45s)
STEP: updating the image
May 25 11:26:53.249: INFO: Successfully updated pod "pod-back-off-image"
STEP: get restart delay after image update
May 25 11:27:15.317: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-05-25 11:27:02 +0000 UTC restartedAt=2021-05-25 11:27:14 +0000 UTC (12s)
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:27:15.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4433" for this suite.
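The observed delays above (47s, then 1m33s, then 2m45s, then 12s after the image update) show the crash-loop back-off roughly doubling per restart and starting over once the container's image changes. A rough model of that behavior, assuming a 10-second initial delay and per-(container, image) tracking; `CrashLoopBackOff` is our own sketch, not kubelet source:

```python
INITIAL_DELAY = 10.0  # seconds (assumed initial back-off)

class CrashLoopBackOff:
    """Toy model: delay doubles per restart, keyed by container+image."""

    def __init__(self):
        self.delays = {}

    def next_delay(self, key):
        delay = self.delays.get(key, INITIAL_DELAY / 2) * 2
        self.delays[key] = delay
        return delay

    def reset(self, key):
        # Updating the container's image discards its back-off history,
        # which is why the post-update delay above drops to ~12s.
        self.delays.pop(key, None)

b = CrashLoopBackOff()
print([b.next_delay("pod/c:v1") for _ in range(3)])  # [10.0, 20.0, 40.0]
b.reset("pod/c:v1")               # image update -> timer starts over
print(b.next_delay("pod/c:v2"))   # 10.0
```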
• [SLOW TEST:406.146 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681
------------------------------
{"msg":"PASSED [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":-1,"completed":2,"skipped":274,"failed":0}
May 25 11:27:15.329: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:20:50.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722
May 25 11:20:50.922: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
May 25 11:20:52.926: INFO: The status of Pod back-off-cap is Running (Ready = true)
STEP: getting restart delay when capped
May 25 11:32:32.453: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-05-25 11:27:19 +0000 UTC restartedAt=2021-05-25 11:32:31 +0000 UTC (5m12s)
May 25 11:37:44.404: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-05-25 11:32:36 +0000 UTC restartedAt=2021-05-25 11:37:43 +0000 UTC (5m7s)
May 25 11:42:54.882: INFO: getRestartDelay: restartCount = 9, finishedAt=2021-05-25 11:37:48 +0000 UTC restartedAt=2021-05-25 11:42:54 +0000 UTC (5m6s)
STEP: getting restart delay after a capped delay
May 25 11:48:03.691: INFO: getRestartDelay: restartCount = 10, finishedAt=2021-05-25 11:42:59 +0000 UTC restartedAt=2021-05-25 11:48:02 +0000 UTC (5m3s)
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:48:03.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8015" for this suite.
• [SLOW TEST:1632.823 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722
------------------------------
{"msg":"PASSED [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":-1,"completed":5,"skipped":575,"failed":0}
May 25 11:48:03.703: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted","total":-1,"completed":3,"skipped":564,"failed":0}
May 25 11:22:30.606: INFO: Running AfterSuite actions on all nodes
May 25 11:48:03.742: INFO: Running AfterSuite actions on node 1
May 25 11:48:03.742: INFO: Skipping dumping logs from cluster
Ran 51 of 5771 Specs in 1665.475 seconds
SUCCESS! -- 51 Passed | 0 Failed | 0 Pending | 5720 Skipped
Ginkgo ran 1 suite in 27m47.320310225s
Test Suite Passed
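A note on the MaxContainerBackOff run in the suite above: the measured delays (5m12s, 5m7s, 5m6s, 5m3s) hover just above five minutes because the back-off itself is capped at 5m and the extra seconds are pod-restart overhead, not further growth. A sketch of the capping rule, assuming a 10s initial delay and a 300s cap:

```python
MAX_CONTAINER_BACKOFF = 300  # seconds (the 5-minute cap under test)

def restart_delay(restart_count, initial=10):
    """Exponential back-off, capped: 10, 20, 40, ... then flat at 300."""
    return min(initial * 2 ** restart_count, MAX_CONTAINER_BACKOFF)

delays = [restart_delay(n) for n in range(10)]
print(delays)  # [10, 20, 40, 80, 160, 300, 300, 300, 300, 300]
```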