Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1621937292 - Will randomize all specs
Will run 5771 specs

Running in parallel across 10 nodes

May 25 10:08:14.045: INFO: >>> kubeConfig: /root/.kube/config
May 25 10:08:14.049: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 25 10:08:14.077: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 25 10:08:14.129: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 25 10:08:14.129: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 25 10:08:14.130: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 25 10:08:14.143: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
May 25 10:08:14.143: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 25 10:08:14.143: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds' (0 seconds elapsed)
May 25 10:08:14.143: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 25 10:08:14.143: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed)
May 25 10:08:14.143: INFO: e2e test version: v1.21.1
May 25 10:08:14.144: INFO: kube-apiserver version: v1.21.1
May 25 10:08:14.145: INFO: >>> kubeConfig: /root/.kube/config
May 25 10:08:14.151: INFO: Cluster IP family: ipv4
S
------------------------------
May 25 10:08:14.146: INFO: >>> kubeConfig: /root/.kube/config
May 25 10:08:14.166: INFO: Cluster IP family: ipv4
SSS
------------------------------
May 25 10:08:14.150: INFO: >>> kubeConfig: /root/.kube/config
May 25 10:08:14.172: INFO: Cluster IP family: ipv4
SSSSSSSSSSSS
------------------------------
May 25 10:08:14.162: INFO: >>> kubeConfig: /root/.kube/config
May 25 10:08:14.183: INFO: Cluster IP family: ipv4
May 25 10:08:14.162: INFO: >>> kubeConfig: /root/.kube/config
May 25 10:08:14.184: INFO: Cluster IP family: ipv4
May 25 10:08:14.163: INFO: >>> kubeConfig: /root/.kube/config
May 25 10:08:14.184: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSS
------------------------------
May 25 10:08:14.167: INFO: >>> kubeConfig: /root/.kube/config
May 25 10:08:14.189: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
May 25 10:08:14.180: INFO: >>> kubeConfig: /root/.kube/config
May 25 10:08:14.203: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
May 25 10:08:14.183: INFO: >>> kubeConfig: /root/.kube/config
May 25 10:08:14.205: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
May 25 10:08:14.195: INFO: >>> kubeConfig: /root/.kube/config
May 25 10:08:14.215: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:08:14.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename podtemplate
W0525 10:08:14.266352 35 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 25 10:08:14.266: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 25 10:08:14.268: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a collection of pod templates [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create set of pod templates
May 25 10:08:14.273: INFO: created test-podtemplate-1
May 25 10:08:14.276: INFO: created test-podtemplate-2
May 25 10:08:14.279: INFO: created test-podtemplate-3
STEP: get a list of pod templates with a label in the current namespace
STEP: delete collection of pod templates
May 25 10:08:14.281: INFO: requesting DeleteCollection of pod templates
STEP: check that the list of pod templates matches the requested quantity
May 25 10:08:14.291: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:08:14.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-5880" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":1,"skipped":26,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:08:14.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
W0525 10:08:14.230643 25 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 25 10:08:14.230: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 25 10:08:14.234: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating pod
May 25 10:08:14.247: INFO: The status of Pod pod-hostip-6ff8b0bc-f2a2-47ed-a23e-953a0e947bb3 is Pending, waiting for it to be Running (with Ready = true)
May 25 10:08:16.254: INFO: The status of Pod pod-hostip-6ff8b0bc-f2a2-47ed-a23e-953a0e947bb3 is Pending, waiting for it to be Running (with Ready = true)
May 25 10:08:18.251: INFO: The status of Pod pod-hostip-6ff8b0bc-f2a2-47ed-a23e-953a0e947bb3 is Pending, waiting for it to be Running (with Ready = true)
May 25 10:08:20.251: INFO: The status of Pod pod-hostip-6ff8b0bc-f2a2-47ed-a23e-953a0e947bb3 is Running (Ready = true)
May 25 10:08:20.257: INFO: Pod pod-hostip-6ff8b0bc-f2a2-47ed-a23e-953a0e947bb3 has hostIP: 172.18.0.2
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:08:20.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-101" for this suite.
• [SLOW TEST:6.060 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:08:20.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-9e42801e-b723-425a-bd17-a08a7e059d77
STEP: Creating a pod to test consume configMaps
May 25 10:08:20.366: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7b42fc37-c84b-4248-ac1c-f8cd43d21f21" in namespace "projected-844" to be "Succeeded or Failed"
May 25 10:08:20.369: INFO: Pod "pod-projected-configmaps-7b42fc37-c84b-4248-ac1c-f8cd43d21f21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.852103ms
May 25 10:08:22.374: INFO: Pod "pod-projected-configmaps-7b42fc37-c84b-4248-ac1c-f8cd43d21f21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007117641s
STEP: Saw pod success
May 25 10:08:22.374: INFO: Pod "pod-projected-configmaps-7b42fc37-c84b-4248-ac1c-f8cd43d21f21" satisfied condition "Succeeded or Failed"
May 25 10:08:22.377: INFO: Trying to get logs from node v1.21-worker2 pod pod-projected-configmaps-7b42fc37-c84b-4248-ac1c-f8cd43d21f21 container agnhost-container:
STEP: delete the pod
May 25 10:08:22.407: INFO: Waiting for pod pod-projected-configmaps-7b42fc37-c84b-4248-ac1c-f8cd43d21f21 to disappear
May 25 10:08:22.410: INFO: Pod pod-projected-configmaps-7b42fc37-c84b-4248-ac1c-f8cd43d21f21 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:08:22.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-844" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":46,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:08:14.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
W0525 10:08:14.228008 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 25 10:08:14.228: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 25 10:08:14.231: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on tmpfs
May 25 10:08:14.244: INFO: Waiting up to 5m0s for pod "pod-ce87242a-98c1-4cf5-b2bd-95bf6e4251cf" in namespace "emptydir-2234" to be "Succeeded or Failed"
May 25 10:08:14.247: INFO: Pod "pod-ce87242a-98c1-4cf5-b2bd-95bf6e4251cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.780397ms
May 25 10:08:16.255: INFO: Pod "pod-ce87242a-98c1-4cf5-b2bd-95bf6e4251cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010946098s
May 25 10:08:18.259: INFO: Pod "pod-ce87242a-98c1-4cf5-b2bd-95bf6e4251cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015086008s
May 25 10:08:20.263: INFO: Pod "pod-ce87242a-98c1-4cf5-b2bd-95bf6e4251cf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018866318s
May 25 10:08:22.266: INFO: Pod "pod-ce87242a-98c1-4cf5-b2bd-95bf6e4251cf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021937556s
May 25 10:08:24.271: INFO: Pod "pod-ce87242a-98c1-4cf5-b2bd-95bf6e4251cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.02666583s
STEP: Saw pod success
May 25 10:08:24.271: INFO: Pod "pod-ce87242a-98c1-4cf5-b2bd-95bf6e4251cf" satisfied condition "Succeeded or Failed"
May 25 10:08:24.274: INFO: Trying to get logs from node v1.21-worker pod pod-ce87242a-98c1-4cf5-b2bd-95bf6e4251cf container test-container:
STEP: delete the pod
May 25 10:08:24.884: INFO: Waiting for pod pod-ce87242a-98c1-4cf5-b2bd-95bf6e4251cf to disappear
May 25 10:08:24.888: INFO: Pod pod-ce87242a-98c1-4cf5-b2bd-95bf6e4251cf no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:08:24.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2234" for this suite.
• [SLOW TEST:10.690 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:08:22.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in volume subpath
May 25 10:08:22.514: INFO: Waiting up to 5m0s for pod "var-expansion-7f5ba966-6c1e-4241-a013-e943b1d708fa" in namespace "var-expansion-4167" to be "Succeeded or Failed"
May 25 10:08:22.517: INFO: Pod "var-expansion-7f5ba966-6c1e-4241-a013-e943b1d708fa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.054194ms
May 25 10:08:24.584: INFO: Pod "var-expansion-7f5ba966-6c1e-4241-a013-e943b1d708fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.0702184s
STEP: Saw pod success
May 25 10:08:24.584: INFO: Pod "var-expansion-7f5ba966-6c1e-4241-a013-e943b1d708fa" satisfied condition "Succeeded or Failed"
May 25 10:08:24.588: INFO: Trying to get logs from node v1.21-worker2 pod var-expansion-7f5ba966-6c1e-4241-a013-e943b1d708fa container dapi-container:
STEP: delete the pod
May 25 10:08:24.889: INFO: Waiting for pod var-expansion-7f5ba966-6c1e-4241-a013-e943b1d708fa to disappear
May 25 10:08:24.892: INFO: Pod var-expansion-7f5ba966-6c1e-4241-a013-e943b1d708fa no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:08:24.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4167" for this suite.
•
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:08:14.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
W0525 10:08:14.269678 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 25 10:08:14.269: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 25 10:08:14.272: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 25 10:08:14.278: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-f85b28f2-14ec-4b95-8d13-75e869cbec59" in namespace "security-context-test-5977" to be "Succeeded or Failed"
May 25 10:08:14.281: INFO: Pod "busybox-privileged-false-f85b28f2-14ec-4b95-8d13-75e869cbec59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168886ms
May 25 10:08:16.285: INFO: Pod "busybox-privileged-false-f85b28f2-14ec-4b95-8d13-75e869cbec59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006395576s
May 25 10:08:18.290: INFO: Pod "busybox-privileged-false-f85b28f2-14ec-4b95-8d13-75e869cbec59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011111472s
May 25 10:08:20.294: INFO: Pod "busybox-privileged-false-f85b28f2-14ec-4b95-8d13-75e869cbec59": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015918998s
May 25 10:08:22.299: INFO: Pod "busybox-privileged-false-f85b28f2-14ec-4b95-8d13-75e869cbec59": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020482905s
May 25 10:08:24.583: INFO: Pod "busybox-privileged-false-f85b28f2-14ec-4b95-8d13-75e869cbec59": Phase="Pending", Reason="", readiness=false. Elapsed: 10.304848778s
May 25 10:08:26.782: INFO: Pod "busybox-privileged-false-f85b28f2-14ec-4b95-8d13-75e869cbec59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.503154108s
May 25 10:08:26.782: INFO: Pod "busybox-privileged-false-f85b28f2-14ec-4b95-8d13-75e869cbec59" satisfied condition "Succeeded or Failed"
May 25 10:08:26.883: INFO: Got logs for pod "busybox-privileged-false-f85b28f2-14ec-4b95-8d13-75e869cbec59": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:08:26.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5977" for this suite.
• [SLOW TEST:12.841 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with privileged
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":40,"failed":0}
SS
------------------------------
[BeforeEach] [sig-node] Lease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:08:27.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] Lease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:08:27.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-8132" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":2,"skipped":42,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:08:14.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
May 25 10:08:14.395: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1f5478f9-e513-4a94-ab24-53c3a5b09dff" in namespace "downward-api-2522" to be "Succeeded or Failed"
May 25 10:08:14.397: INFO: Pod "downwardapi-volume-1f5478f9-e513-4a94-ab24-53c3a5b09dff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.553297ms
May 25 10:08:16.401: INFO: Pod "downwardapi-volume-1f5478f9-e513-4a94-ab24-53c3a5b09dff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006251109s
May 25 10:08:18.406: INFO: Pod "downwardapi-volume-1f5478f9-e513-4a94-ab24-53c3a5b09dff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011045697s
May 25 10:08:20.411: INFO: Pod "downwardapi-volume-1f5478f9-e513-4a94-ab24-53c3a5b09dff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015798951s
May 25 10:08:22.415: INFO: Pod "downwardapi-volume-1f5478f9-e513-4a94-ab24-53c3a5b09dff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.02002813s
May 25 10:08:24.584: INFO: Pod "downwardapi-volume-1f5478f9-e513-4a94-ab24-53c3a5b09dff": Phase="Pending", Reason="", readiness=false. Elapsed: 10.188829912s
May 25 10:08:26.782: INFO: Pod "downwardapi-volume-1f5478f9-e513-4a94-ab24-53c3a5b09dff": Phase="Pending", Reason="", readiness=false. Elapsed: 12.387240465s
May 25 10:08:28.787: INFO: Pod "downwardapi-volume-1f5478f9-e513-4a94-ab24-53c3a5b09dff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.391898408s
STEP: Saw pod success
May 25 10:08:28.787: INFO: Pod "downwardapi-volume-1f5478f9-e513-4a94-ab24-53c3a5b09dff" satisfied condition "Succeeded or Failed"
May 25 10:08:28.790: INFO: Trying to get logs from node v1.21-worker pod downwardapi-volume-1f5478f9-e513-4a94-ab24-53c3a5b09dff container client-container:
STEP: delete the pod
May 25 10:08:28.806: INFO: Waiting for pod downwardapi-volume-1f5478f9-e513-4a94-ab24-53c3a5b09dff to disappear
May 25 10:08:28.809: INFO: Pod downwardapi-volume-1f5478f9-e513-4a94-ab24-53c3a5b09dff no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:08:28.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2522" for this suite.
• [SLOW TEST:14.456 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:08:14.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
W0525 10:08:14.256951 32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 25 10:08:14.257: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 25 10:08:14.259: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-676471bd-d8b3-48c0-801d-8f05c3d5ef2e
STEP: Creating a pod to test consume configMaps
May 25 10:08:14.269: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9c079817-8117-4640-8994-d7939f63ebb5" in namespace "projected-5098" to be "Succeeded or Failed"
May 25 10:08:14.271: INFO: Pod "pod-projected-configmaps-9c079817-8117-4640-8994-d7939f63ebb5": Phase="Pending", Reason="", readiness=false. Elapsed: 1.868613ms
May 25 10:08:16.275: INFO: Pod "pod-projected-configmaps-9c079817-8117-4640-8994-d7939f63ebb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005624066s
May 25 10:08:18.279: INFO: Pod "pod-projected-configmaps-9c079817-8117-4640-8994-d7939f63ebb5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009834141s
May 25 10:08:20.283: INFO: Pod "pod-projected-configmaps-9c079817-8117-4640-8994-d7939f63ebb5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013941073s
May 25 10:08:22.288: INFO: Pod "pod-projected-configmaps-9c079817-8117-4640-8994-d7939f63ebb5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018670618s
May 25 10:08:24.584: INFO: Pod "pod-projected-configmaps-9c079817-8117-4640-8994-d7939f63ebb5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.314688484s
May 25 10:08:26.781: INFO: Pod "pod-projected-configmaps-9c079817-8117-4640-8994-d7939f63ebb5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.512354206s
May 25 10:08:28.787: INFO: Pod "pod-projected-configmaps-9c079817-8117-4640-8994-d7939f63ebb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.517534773s
STEP: Saw pod success
May 25 10:08:28.787: INFO: Pod "pod-projected-configmaps-9c079817-8117-4640-8994-d7939f63ebb5" satisfied condition "Succeeded or Failed"
May 25 10:08:28.791: INFO: Trying to get logs from node v1.21-worker pod pod-projected-configmaps-9c079817-8117-4640-8994-d7939f63ebb5 container agnhost-container:
STEP: delete the pod
May 25 10:08:28.806: INFO: Waiting for pod pod-projected-configmaps-9c079817-8117-4640-8994-d7939f63ebb5 to disappear
May 25 10:08:28.809: INFO: Pod pod-projected-configmaps-9c079817-8117-4640-8994-d7939f63ebb5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:08:28.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5098" for this suite.
• [SLOW TEST:14.588 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:08:14.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
W0525 10:08:14.252548 21 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 25 10:08:14.252: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 25 10:08:14.255: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test service account token:
May 25 10:08:14.261: INFO: Waiting up to 5m0s for pod "test-pod-fe492a86-c32b-4ff8-b2f4-c0faeed9203e" in namespace "svcaccounts-8000" to be "Succeeded or Failed"
May 25 10:08:14.263: INFO: Pod "test-pod-fe492a86-c32b-4ff8-b2f4-c0faeed9203e": Phase="Pending", Reason="", readiness=false. Elapsed: 1.60093ms
May 25 10:08:16.267: INFO: Pod "test-pod-fe492a86-c32b-4ff8-b2f4-c0faeed9203e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005755958s
May 25 10:08:18.271: INFO: Pod "test-pod-fe492a86-c32b-4ff8-b2f4-c0faeed9203e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010285818s
May 25 10:08:20.275: INFO: Pod "test-pod-fe492a86-c32b-4ff8-b2f4-c0faeed9203e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014383787s
May 25 10:08:22.279: INFO: Pod "test-pod-fe492a86-c32b-4ff8-b2f4-c0faeed9203e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018009185s
May 25 10:08:24.584: INFO: Pod "test-pod-fe492a86-c32b-4ff8-b2f4-c0faeed9203e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.322549103s
May 25 10:08:26.782: INFO: Pod "test-pod-fe492a86-c32b-4ff8-b2f4-c0faeed9203e": Phase="Running", Reason="", readiness=true. Elapsed: 12.520618634s
May 25 10:08:28.787: INFO: Pod "test-pod-fe492a86-c32b-4ff8-b2f4-c0faeed9203e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.525779596s
STEP: Saw pod success
May 25 10:08:28.787: INFO: Pod "test-pod-fe492a86-c32b-4ff8-b2f4-c0faeed9203e" satisfied condition "Succeeded or Failed"
May 25 10:08:28.791: INFO: Trying to get logs from node v1.21-worker pod test-pod-fe492a86-c32b-4ff8-b2f4-c0faeed9203e container agnhost-container:
STEP: delete the pod
May 25 10:08:28.807: INFO: Waiting for pod test-pod-fe492a86-c32b-4ff8-b2f4-c0faeed9203e to disappear
May 25 10:08:28.809: INFO: Pod test-pod-fe492a86-c32b-4ff8-b2f4-c0faeed9203e no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:08:28.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8000" for this suite.
• [SLOW TEST:14.594 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":67,"failed":0}
S
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}
S
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":1,"skipped":33,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:08:28.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63
W0525 10:08:28.969245 32 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
[It] should support CronJob API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a cronjob
STEP: creating
STEP: getting
STEP: listing
STEP: watching
May 25 10:08:28.980: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
May 25 10:08:28.984: INFO: starting watch
STEP: patching
STEP: updating
May 25 10:08:29.025: INFO: waiting for watch events with expected annotations
May 25 10:08:29.025: INFO: saw patched and updated annotations
STEP: patching /status
STEP: updating /status
STEP: get /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:08:29.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-1459" for this suite.
•
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:08:27.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-58c6b76b-2d9d-4a46-ae8c-c62b6394cd2e
STEP: Creating a pod to test consume configMaps
May 25 10:08:27.633: INFO: Waiting up to 5m0s for pod "pod-configmaps-1b1664a5-ee39-4350-bce9-cd05f7332432" in namespace "configmap-8742" to be "Succeeded or Failed"
May 25 10:08:27.635: INFO: Pod "pod-configmaps-1b1664a5-ee39-4350-bce9-cd05f7332432": Phase="Pending", Reason="", readiness=false. Elapsed: 2.346621ms
May 25 10:08:29.640: INFO: Pod "pod-configmaps-1b1664a5-ee39-4350-bce9-cd05f7332432": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00667516s
STEP: Saw pod success
May 25 10:08:29.640: INFO: Pod "pod-configmaps-1b1664a5-ee39-4350-bce9-cd05f7332432" satisfied condition "Succeeded or Failed"
May 25 10:08:29.643: INFO: Trying to get logs from node v1.21-worker2 pod pod-configmaps-1b1664a5-ee39-4350-bce9-cd05f7332432 container agnhost-container:
STEP: delete the pod
May 25 10:08:29.658: INFO: Waiting for pod pod-configmaps-1b1664a5-ee39-4350-bce9-cd05f7332432 to disappear
May 25 10:08:29.660: INFO: Pod pod-configmaps-1b1664a5-ee39-4350-bce9-cd05f7332432 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:08:29.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8742" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":65,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:08:28.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's args
May 25 10:08:28.880: INFO: Waiting up to 5m0s for pod "var-expansion-f0bd19f6-2ef9-4286-96d7-30fcdbd7f5c0" in namespace "var-expansion-6359" to be "Succeeded or Failed"
May 25 10:08:28.883: INFO: Pod
"var-expansion-f0bd19f6-2ef9-4286-96d7-30fcdbd7f5c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.955631ms May 25 10:08:30.887: INFO: Pod "var-expansion-f0bd19f6-2ef9-4286-96d7-30fcdbd7f5c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006633404s STEP: Saw pod success May 25 10:08:30.887: INFO: Pod "var-expansion-f0bd19f6-2ef9-4286-96d7-30fcdbd7f5c0" satisfied condition "Succeeded or Failed" May 25 10:08:30.890: INFO: Trying to get logs from node v1.21-worker2 pod var-expansion-f0bd19f6-2ef9-4286-96d7-30fcdbd7f5c0 container dapi-container: STEP: delete the pod May 25 10:08:30.905: INFO: Waiting for pod var-expansion-f0bd19f6-2ef9-4286-96d7-30fcdbd7f5c0 to disappear May 25 10:08:30.908: INFO: Pod var-expansion-f0bd19f6-2ef9-4286-96d7-30fcdbd7f5c0 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:08:30.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6359" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":46,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:08:14.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services W0525 10:08:14.248077 22 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 25 10:08:14.248: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 25 10:08:14.252: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service multi-endpoint-test in namespace services-5686 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5686 to expose endpoints map[] May 25 10:08:14.261: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found May 25 10:08:15.274: INFO: successfully validated that service multi-endpoint-test in namespace services-5686 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-5686 May 25 10:08:15.284: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) May 25 10:08:17.289: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with 
Ready = true) May 25 10:08:19.289: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5686 to expose endpoints map[pod1:[100]] May 25 10:08:19.300: INFO: successfully validated that service multi-endpoint-test in namespace services-5686 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-5686 May 25 10:08:19.311: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 25 10:08:21.316: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 25 10:08:23.315: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 25 10:08:25.679: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 25 10:08:27.479: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 25 10:08:29.314: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 25 10:08:31.314: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5686 to expose endpoints map[pod1:[100] pod2:[101]] May 25 10:08:31.331: INFO: successfully validated that service multi-endpoint-test in namespace services-5686 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-5686 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5686 to expose endpoints map[pod2:[101]] May 25 10:08:31.348: INFO: successfully validated that service multi-endpoint-test in namespace services-5686 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-5686 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5686 to expose endpoints map[] May 25 10:08:31.361: INFO: successfully validated that service multi-endpoint-test in 
namespace services-5686 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:08:31.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5686" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:17.164 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:08:31.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:08:31.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "services-7529" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":2,"skipped":30,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0} [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:08:24.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:08:24.948: INFO: The status of Pod busybox-scheduling-409e2184-e6a3-4265-b6af-25aa9953aad4 is Pending, waiting for it to be Running (with Ready = true) May 25 10:08:27.082: INFO: The status of Pod busybox-scheduling-409e2184-e6a3-4265-b6af-25aa9953aad4 is Pending, waiting for it to be Running (with Ready = true) May 25 10:08:28.951: INFO: The status of Pod busybox-scheduling-409e2184-e6a3-4265-b6af-25aa9953aad4 is Pending, waiting for it to be Running (with Ready = true) May 25 10:08:30.952: INFO: The status of Pod busybox-scheduling-409e2184-e6a3-4265-b6af-25aa9953aad4 is Pending, waiting for it to be Running 
(with Ready = true) May 25 10:08:32.951: INFO: The status of Pod busybox-scheduling-409e2184-e6a3-4265-b6af-25aa9953aad4 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:08:32.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9696" for this suite. • [SLOW TEST:8.072 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a busybox command in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:41 should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:08:28.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:08:28.863: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-23301c77-6485-4442-9dc5-82b1fe432ae0" in namespace "security-context-test-1437" to be "Succeeded or Failed" May 25 10:08:28.866: INFO: Pod "busybox-readonly-false-23301c77-6485-4442-9dc5-82b1fe432ae0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.877336ms May 25 10:08:30.870: INFO: Pod "busybox-readonly-false-23301c77-6485-4442-9dc5-82b1fe432ae0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007377973s May 25 10:08:32.875: INFO: Pod "busybox-readonly-false-23301c77-6485-4442-9dc5-82b1fe432ae0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011675073s May 25 10:08:34.878: INFO: Pod "busybox-readonly-false-23301c77-6485-4442-9dc5-82b1fe432ae0": Phase="Running", Reason="", readiness=true. Elapsed: 6.015085686s May 25 10:08:36.883: INFO: Pod "busybox-readonly-false-23301c77-6485-4442-9dc5-82b1fe432ae0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.020399839s May 25 10:08:36.884: INFO: Pod "busybox-readonly-false-23301c77-6485-4442-9dc5-82b1fe432ae0" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:08:36.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1437" for this suite. 
• [SLOW TEST:8.072 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":68,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:08:33.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 25 10:08:33.080: INFO: Waiting up to 5m0s for pod "downwardapi-volume-04f1eaff-b52f-4ed0-a83c-76a6158af7a8" in namespace "downward-api-8158" to be "Succeeded or Failed" May 25 10:08:33.083: INFO: Pod "downwardapi-volume-04f1eaff-b52f-4ed0-a83c-76a6158af7a8": Phase="Pending", 
Reason="", readiness=false. Elapsed: 2.769262ms May 25 10:08:35.087: INFO: Pod "downwardapi-volume-04f1eaff-b52f-4ed0-a83c-76a6158af7a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006559153s May 25 10:08:37.091: INFO: Pod "downwardapi-volume-04f1eaff-b52f-4ed0-a83c-76a6158af7a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010723153s STEP: Saw pod success May 25 10:08:37.091: INFO: Pod "downwardapi-volume-04f1eaff-b52f-4ed0-a83c-76a6158af7a8" satisfied condition "Succeeded or Failed" May 25 10:08:37.094: INFO: Trying to get logs from node v1.21-worker pod downwardapi-volume-04f1eaff-b52f-4ed0-a83c-76a6158af7a8 container client-container: STEP: delete the pod May 25 10:08:37.107: INFO: Waiting for pod downwardapi-volume-04f1eaff-b52f-4ed0-a83c-76a6158af7a8 to disappear May 25 10:08:37.109: INFO: Pod downwardapi-volume-04f1eaff-b52f-4ed0-a83c-76a6158af7a8 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:08:37.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8158" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":49,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:08:29.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-7dff4b9e-bf1f-4818-aba6-5446cf12ed16 STEP: Creating a pod to test consume secrets May 25 10:08:29.814: INFO: Waiting up to 5m0s for pod "pod-secrets-d4d73a11-aafb-44dc-9037-9ca835e37782" in namespace "secrets-1544" to be "Succeeded or Failed" May 25 10:08:29.816: INFO: Pod "pod-secrets-d4d73a11-aafb-44dc-9037-9ca835e37782": Phase="Pending", Reason="", readiness=false. Elapsed: 2.453163ms May 25 10:08:31.821: INFO: Pod "pod-secrets-d4d73a11-aafb-44dc-9037-9ca835e37782": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007466255s May 25 10:08:33.825: INFO: Pod "pod-secrets-d4d73a11-aafb-44dc-9037-9ca835e37782": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01127374s May 25 10:08:35.830: INFO: Pod "pod-secrets-d4d73a11-aafb-44dc-9037-9ca835e37782": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016037774s May 25 10:08:37.835: INFO: Pod "pod-secrets-d4d73a11-aafb-44dc-9037-9ca835e37782": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.020802917s STEP: Saw pod success May 25 10:08:37.835: INFO: Pod "pod-secrets-d4d73a11-aafb-44dc-9037-9ca835e37782" satisfied condition "Succeeded or Failed" May 25 10:08:37.838: INFO: Trying to get logs from node v1.21-worker pod pod-secrets-d4d73a11-aafb-44dc-9037-9ca835e37782 container secret-volume-test: STEP: delete the pod May 25 10:08:37.854: INFO: Waiting for pod pod-secrets-d4d73a11-aafb-44dc-9037-9ca835e37782 to disappear May 25 10:08:37.857: INFO: Pod pod-secrets-d4d73a11-aafb-44dc-9037-9ca835e37782 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:08:37.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1544" for this suite. • [SLOW TEST:8.098 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":113,"failed":0} S ------------------------------ {"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":2,"skipped":69,"failed":0} [BeforeEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:08:29.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods 
scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 25 10:08:37.113: INFO: &Pod{ObjectMeta:{send-events-4780b14c-6e57-45dc-9fb1-75e6b0d0efc5 events-865 82154e82-fc35-48a5-a015-049a3659b810 489987 0 2021-05-25 10:08:29 +0000 UTC map[name:foo time:95170911] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.224" ], "mac": "26:61:30:0b:f2:d7", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.224" ], "mac": "26:61:30:0b:f2:d7", "default": true, "dns": {} }]] [] [] [{e2e.test Update v1 2021-05-25 10:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-25 10:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-25 10:08:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.224\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fbcg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,Hos
tIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fbcg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-05-25 10:08:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:08:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:08:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:08:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.1.224,StartTime:2021-05-25 10:08:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-25 10:08:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:containerd://2eeac74dbae0bd29a22d5171ed788548cfa354b9d4769fe96af5b94a96610f8b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.224,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 25 10:08:39.280: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 25 10:08:41.285: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:08:41.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-865" for this suite. 
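The Events test that just completed verifies that both the scheduler and the kubelet emitted events for the pod, which it does by listing events filtered on the involved object and the reporting component. The selection logic can be sketched with a pared-down event type; the struct and function names are illustrative assumptions, not `corev1.Event` or the test's actual field-selector code.

```go
package main

import "fmt"

// event is a pared-down stand-in for a Kubernetes Event, keeping only the
// fields the test above selects on.
type event struct {
	InvolvedObjectName string
	SourceComponent    string
	Reason             string
}

// eventsFrom filters events the way the test's field selector does:
// match the pod name and the reporting component ("default-scheduler"
// for scheduler events, "kubelet" for kubelet events). Illustrative
// sketch of the selection, done client-side here instead of server-side.
func eventsFrom(events []event, podName, component string) []event {
	var out []event
	for _, e := range events {
		if e.InvolvedObjectName == podName && e.SourceComponent == component {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	evs := []event{
		{"send-events-x", "default-scheduler", "Scheduled"},
		{"send-events-x", "kubelet", "Pulled"},
		{"other-pod", "kubelet", "Started"},
	}
	fmt.Println(len(eventsFrom(evs, "send-events-x", "kubelet")))
}
```

The "Saw scheduler event" / "Saw kubelet event" log lines above correspond to each of these filtered lists coming back non-empty within the polling window.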
• [SLOW TEST:12.234 seconds] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":-1,"completed":3,"skipped":69,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:08:37.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-36695f76-eaa7-402d-9cb7-3524335f6aac STEP: Creating a pod to test consume configMaps May 25 10:08:37.160: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-61765d11-1689-415c-babf-0c9be23a7030" in namespace "projected-5422" to be "Succeeded or Failed" May 25 10:08:37.163: INFO: Pod "pod-projected-configmaps-61765d11-1689-415c-babf-0c9be23a7030": Phase="Pending", Reason="", readiness=false. Elapsed: 2.763766ms May 25 10:08:39.279: INFO: Pod "pod-projected-configmaps-61765d11-1689-415c-babf-0c9be23a7030": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118991964s May 25 10:08:41.284: INFO: Pod "pod-projected-configmaps-61765d11-1689-415c-babf-0c9be23a7030": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.123969626s STEP: Saw pod success May 25 10:08:41.285: INFO: Pod "pod-projected-configmaps-61765d11-1689-415c-babf-0c9be23a7030" satisfied condition "Succeeded or Failed" May 25 10:08:41.288: INFO: Trying to get logs from node v1.21-worker pod pod-projected-configmaps-61765d11-1689-415c-babf-0c9be23a7030 container projected-configmap-volume-test: STEP: delete the pod May 25 10:08:41.303: INFO: Waiting for pod pod-projected-configmaps-61765d11-1689-415c-babf-0c9be23a7030 to disappear May 25 10:08:41.305: INFO: Pod pod-projected-configmaps-61765d11-1689-415c-babf-0c9be23a7030 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:08:41.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5422" for this suite. •S ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":50,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:08:41.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all 
Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:08:41.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1647" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":5,"skipped":104,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:08:41.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8176.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8176.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8176.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8176.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-8176.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8176.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 10:08:45.416: INFO: DNS probes using dns-8176/dns-test-9035dd24-cd53-45c6-a81f-f7397602f107 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:08:45.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8176" for this suite. 
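The `awk` pipeline in the probe script above turns the pod's IP into its dashed DNS A-record name (`<ip-with-dashes>.<namespace>.pod.<cluster-domain>`). A minimal equivalent sketch in Python — the function name and sample IP are ours, not from the suite:

```python
def pod_a_record(pod_ip: str, namespace: str, cluster_domain: str = "cluster.local") -> str:
    """Build the dashed pod A-record name the dig probes query,
    e.g. 10-244-1-5.dns-8176.pod.cluster.local."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"

# Example (hypothetical pod IP):
print(pod_a_record("10.244.1.5", "dns-8176"))  # → 10-244-1-5.dns-8176.pod.cluster.local
```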
• ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":83,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:08:45.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap that has name configmap-test-emptyKey-37090a76-9a30-47d7-826e-f002f716a859 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:08:45.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7195" for this suite. 
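The ConfigMap spec above is rejected because its data key is empty. A sketch of the validation rule the apiserver applies (to our understanding: keys must be non-empty, at most 253 characters, and consist only of alphanumerics, `-`, `_`, or `.`) — the helper name is ours:

```python
import re

_KEY_RE = re.compile(r"^[-._a-zA-Z0-9]+$")

def is_valid_configmap_key(key: str) -> bool:
    """Approximate the apiserver rule this test exercises: a ConfigMap data
    key must be non-empty, <= 253 chars, and match [-._a-zA-Z0-9]+."""
    return 0 < len(key) <= 253 and bool(_KEY_RE.match(key))

assert not is_valid_configmap_key("")          # the empty key the test submits is rejected
assert is_valid_configmap_key("app.config")    # a typical valid key
```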
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":5,"skipped":90,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:08:36.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 May 25 10:08:36.979: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the sample API server. 
May 25 10:08:37.463: INFO: new replicaset for deployment "sample-apiserver-deployment" is yet to be created May 25 10:08:39.902: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534117, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534117, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534117, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534117, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 10:08:41.908: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534117, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534117, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534117, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534117, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 10:08:44.937: INFO: Waited 1.024869324s for the sample-apiserver to be ready to handle requests. STEP: Read Status for v1alpha1.wardle.example.com STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' STEP: List APIServices May 25 10:08:45.182: INFO: Found v1alpha1.wardle.example.com in APIServiceList [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:08:45.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-7723" for this suite. • [SLOW TEST:9.128 seconds] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":4,"skipped":90,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:08:37.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:08:37.912: INFO: The status of Pod server-envvars-0960dd1a-bf3b-4e0e-8e43-8b46fcc96669 is Pending, waiting for it to be Running (with Ready = true) May 25 10:08:39.916: INFO: The status of Pod server-envvars-0960dd1a-bf3b-4e0e-8e43-8b46fcc96669 is Pending, waiting for it to be Running (with Ready = true) May 25 10:08:41.917: INFO: The status of Pod server-envvars-0960dd1a-bf3b-4e0e-8e43-8b46fcc96669 is Pending, waiting for it to be Running (with Ready = true) May 25 10:08:43.916: INFO: The status of Pod server-envvars-0960dd1a-bf3b-4e0e-8e43-8b46fcc96669 is Running (Ready = true) May 25 10:08:43.931: INFO: Waiting up to 5m0s for pod "client-envvars-ec1e5e1d-5184-4181-baff-a4220b0fd5ee" in namespace "pods-2020" to be "Succeeded or Failed" May 25 10:08:43.934: INFO: Pod "client-envvars-ec1e5e1d-5184-4181-baff-a4220b0fd5ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.485203ms May 25 10:08:45.938: INFO: Pod "client-envvars-ec1e5e1d-5184-4181-baff-a4220b0fd5ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007333719s May 25 10:08:47.943: INFO: Pod "client-envvars-ec1e5e1d-5184-4181-baff-a4220b0fd5ee": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01194164s STEP: Saw pod success May 25 10:08:47.943: INFO: Pod "client-envvars-ec1e5e1d-5184-4181-baff-a4220b0fd5ee" satisfied condition "Succeeded or Failed" May 25 10:08:47.947: INFO: Trying to get logs from node v1.21-worker pod client-envvars-ec1e5e1d-5184-4181-baff-a4220b0fd5ee container env3cont: STEP: delete the pod May 25 10:08:47.963: INFO: Waiting for pod client-envvars-ec1e5e1d-5184-4181-baff-a4220b0fd5ee to disappear May 25 10:08:47.967: INFO: Pod client-envvars-ec1e5e1d-5184-4181-baff-a4220b0fd5ee no longer exists [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:08:47.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2020" for this suite. • [SLOW TEST:10.106 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":114,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:08:31.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for NodePort service 
[LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-6313 STEP: creating service affinity-nodeport in namespace services-6313 STEP: creating replication controller affinity-nodeport in namespace services-6313 I0525 10:08:31.594021 22 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-6313, replica count: 3 I0525 10:08:34.645692 22 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 10:08:37.646467 22 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 25 10:08:37.657: INFO: Creating new exec pod May 25 10:08:40.783: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-6313 exec execpod-affinitytlhdl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80' May 25 10:08:41.042: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" May 25 10:08:41.042: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 25 10:08:41.042: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-6313 exec execpod-affinitytlhdl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.96.78.79 80' May 25 10:08:41.297: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.96.78.79 80\nConnection to 10.96.78.79 80 port [tcp/http] succeeded!\n" May 25 10:08:41.297: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 25 10:08:41.297: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-6313 exec execpod-affinitytlhdl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.4 32430' May 25 10:08:41.501: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.4 32430\nConnection to 172.18.0.4 32430 port [tcp/*] succeeded!\n" May 25 10:08:41.501: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 25 10:08:41.501: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-6313 exec execpod-affinitytlhdl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.2 32430' May 25 10:08:41.742: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.2 32430\nConnection to 172.18.0.2 32430 port [tcp/*] succeeded!\n" May 25 10:08:41.742: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 25 10:08:41.742: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-6313 exec execpod-affinitytlhdl -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.4:32430/ ; done' May 25 10:08:42.101: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32430/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32430/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32430/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32430/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32430/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32430/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32430/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32430/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.18.0.4:32430/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32430/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32430/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32430/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32430/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32430/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32430/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32430/\n" May 25 10:08:42.101: INFO: stdout: "\naffinity-nodeport-s7t2w\naffinity-nodeport-s7t2w\naffinity-nodeport-s7t2w\naffinity-nodeport-s7t2w\naffinity-nodeport-s7t2w\naffinity-nodeport-s7t2w\naffinity-nodeport-s7t2w\naffinity-nodeport-s7t2w\naffinity-nodeport-s7t2w\naffinity-nodeport-s7t2w\naffinity-nodeport-s7t2w\naffinity-nodeport-s7t2w\naffinity-nodeport-s7t2w\naffinity-nodeport-s7t2w\naffinity-nodeport-s7t2w\naffinity-nodeport-s7t2w" May 25 10:08:42.101: INFO: Received response from host: affinity-nodeport-s7t2w May 25 10:08:42.101: INFO: Received response from host: affinity-nodeport-s7t2w May 25 10:08:42.101: INFO: Received response from host: affinity-nodeport-s7t2w May 25 10:08:42.101: INFO: Received response from host: affinity-nodeport-s7t2w May 25 10:08:42.101: INFO: Received response from host: affinity-nodeport-s7t2w May 25 10:08:42.101: INFO: Received response from host: affinity-nodeport-s7t2w May 25 10:08:42.101: INFO: Received response from host: affinity-nodeport-s7t2w May 25 10:08:42.101: INFO: Received response from host: affinity-nodeport-s7t2w May 25 10:08:42.101: INFO: Received response from host: affinity-nodeport-s7t2w May 25 10:08:42.101: INFO: Received response from host: affinity-nodeport-s7t2w May 25 10:08:42.101: INFO: Received response from host: affinity-nodeport-s7t2w May 25 10:08:42.101: INFO: Received response from host: affinity-nodeport-s7t2w May 25 10:08:42.101: INFO: Received response from host: affinity-nodeport-s7t2w May 25 10:08:42.101: 
INFO: Received response from host: affinity-nodeport-s7t2w May 25 10:08:42.101: INFO: Received response from host: affinity-nodeport-s7t2w May 25 10:08:42.101: INFO: Received response from host: affinity-nodeport-s7t2w May 25 10:08:42.101: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-6313, will wait for the garbage collector to delete the pods May 25 10:08:42.167: INFO: Deleting ReplicationController affinity-nodeport took: 4.29439ms May 25 10:08:42.268: INFO: Terminating ReplicationController affinity-nodeport pods took: 101.040478ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:08:55.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6313" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:23.935 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":84,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:08:46.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a 
default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. May 25 10:08:46.155: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 25 10:08:48.159: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 25 10:08:50.161: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook May 25 10:08:50.173: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) May 25 10:08:52.177: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook May 25 10:08:52.185: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 25 10:08:52.188: INFO: Pod pod-with-prestop-http-hook still exists May 25 10:08:54.188: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 25 10:08:54.193: INFO: Pod pod-with-prestop-http-hook still exists May 25 10:08:56.189: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 25 10:08:56.192: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:08:56.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-120" for this suite. 
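The "Waiting for pod … to disappear" lines above come from a fixed-interval poll (roughly every 2s here) with an overall deadline. A minimal sketch of that pattern, with an injectable sleep so it can be exercised without a cluster — all names are ours:

```python
import time

def wait_until_gone(still_exists, timeout=60.0, interval=2.0, sleep=time.sleep):
    """Poll still_exists() every `interval` seconds until it returns False or
    `timeout` elapses. Returns True if the object disappeared in time."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if not still_exists():
            return True
        sleep(interval)
    return not still_exists()
```

Usage against a real API would pass something like `lambda: pod_exists(ns, name)` (hypothetical helper) as `still_exists`.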
• [SLOW TEST:10.094 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":110,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:08:47.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:08:48.032: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 25 10:08:52.026: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4643 --namespace=crd-publish-openapi-4643 create -f -' May 25 10:08:52.479: INFO: stderr: "" May 25 10:08:52.479: INFO: stdout: 
"e2e-test-crd-publish-openapi-810-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 25 10:08:52.479: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4643 --namespace=crd-publish-openapi-4643 delete e2e-test-crd-publish-openapi-810-crds test-cr' May 25 10:08:52.608: INFO: stderr: "" May 25 10:08:52.608: INFO: stdout: "e2e-test-crd-publish-openapi-810-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 25 10:08:52.608: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4643 --namespace=crd-publish-openapi-4643 apply -f -' May 25 10:08:52.902: INFO: stderr: "" May 25 10:08:52.902: INFO: stdout: "e2e-test-crd-publish-openapi-810-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 25 10:08:52.902: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4643 --namespace=crd-publish-openapi-4643 delete e2e-test-crd-publish-openapi-810-crds test-cr' May 25 10:08:53.021: INFO: stderr: "" May 25 10:08:53.021: INFO: stdout: "e2e-test-crd-publish-openapi-810-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 25 10:08:53.021: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4643 explain e2e-test-crd-publish-openapi-810-crds' May 25 10:08:53.297: INFO: stderr: "" May 25 10:08:53.297: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-810-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this 
representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:08:57.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4643" for this suite. 
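The `spec` field above explains as an untyped embedded object because the CRD sets preserve-unknown-fields there. A toy sketch of the distinction the test checks — by default the apiserver prunes fields not in the schema, whereas preservation stores the object as submitted (the flat-schema model here is a deliberate simplification, ours not the suite's):

```python
def prune(obj: dict, schema_props: set) -> dict:
    """Drop fields not named in the (flat, illustrative) schema — what the
    apiserver does unless x-kubernetes-preserve-unknown-fields applies."""
    return {k: v for k, v in obj.items() if k in schema_props}

cr_spec = {"bars": 1, "anything": "goes"}
assert prune(cr_spec, {"bars"}) == {"bars": 1}            # default: unknowns pruned
assert cr_spec == {"bars": 1, "anything": "goes"}         # preserved: stored as-is
```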
• [SLOW TEST:9.356 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":6,"skipped":125,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:08:56.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 May 25 10:08:56.268: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3019 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' May 25 10:08:56.395: INFO: stderr: "" May 25 10:08:56.395: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run May 25 10:08:56.395: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 
--kubeconfig=/root/.kube/config --namespace=kubectl-3019 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}} --dry-run=server' May 25 10:08:56.779: INFO: stderr: "" May 25 10:08:56.779: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 May 25 10:08:56.783: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3019 delete pods e2e-test-httpd-pod' May 25 10:09:05.055: INFO: stderr: "" May 25 10:09:05.055: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:09:05.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3019" for this suite. 
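The dry-run test above patches the pod image with `--dry-run=server` and then verifies the live pod still runs the original `httpd` image: the server computes and returns the patched object but never persists it. A toy Python model of that semantics (this is an illustration of the behavior, not the real strategic-merge-patch implementation):

```python
import copy
import json

def patch_pod(pod, patch, dry_run=False):
    """Toy model of a server-side patch: merge matching containers by name.
    With dry_run=True the stored object is left untouched and only a
    patched preview is returned, mirroring kubectl --dry-run=server."""
    target = copy.deepcopy(pod) if dry_run else pod
    for patched in patch.get("spec", {}).get("containers", []):
        for container in target["spec"]["containers"]:
            if container["name"] == patched["name"]:
                container.update(patched)
    return target

pod = {"spec": {"containers": [{
    "name": "e2e-test-httpd-pod",
    "image": "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1"}]}}

# The same patch body used by the test run above.
patch = json.loads('{"spec":{"containers":[{"name": "e2e-test-httpd-pod",'
                   '"image": "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}}')

preview = patch_pod(pod, patch, dry_run=True)
print(preview["spec"]["containers"][0]["image"])  # patched image in the preview
print(pod["spec"]["containers"][0]["image"])      # live object keeps httpd
```

The assertion the test makes after the patch corresponds to the second print: the persisted pod still has the right (original) image.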
• [SLOW TEST:8.824 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:903 should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":6,"skipped":127,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:09:05.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-projected-all-test-volume-dbd62527-d2d3-42a2-9055-d631c0bd0a2e STEP: Creating secret with name secret-projected-all-test-volume-66f00ce7-54e8-47b7-b4cc-5caface0d9c5 STEP: Creating a pod to test Check all projections for projected volume plugin May 25 10:09:05.144: INFO: Waiting up to 5m0s for pod "projected-volume-76cb5ce3-95c3-4b3c-a549-7856e726fc6b" in namespace "projected-8707" to be "Succeeded or Failed" May 25 10:09:05.149: INFO: Pod "projected-volume-76cb5ce3-95c3-4b3c-a549-7856e726fc6b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.117821ms May 25 10:09:07.153: INFO: Pod "projected-volume-76cb5ce3-95c3-4b3c-a549-7856e726fc6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008566001s STEP: Saw pod success May 25 10:09:07.153: INFO: Pod "projected-volume-76cb5ce3-95c3-4b3c-a549-7856e726fc6b" satisfied condition "Succeeded or Failed" May 25 10:09:07.156: INFO: Trying to get logs from node v1.21-worker2 pod projected-volume-76cb5ce3-95c3-4b3c-a549-7856e726fc6b container projected-all-volume-test: STEP: delete the pod May 25 10:09:07.170: INFO: Waiting for pod projected-volume-76cb5ce3-95c3-4b3c-a549-7856e726fc6b to disappear May 25 10:09:07.175: INFO: Pod projected-volume-76cb5ce3-95c3-4b3c-a549-7856e726fc6b no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:09:07.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8707" for this suite. 
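The projected-volume test above mounts a ConfigMap, a Secret, and downward API data through a single `projected` volume. A hedged sketch of the pod spec shape involved (resource names here are generic placeholders, not the generated names from the run):

```yaml
# Hedged sketch of a projected volume combining all three source types.
# Names are hypothetical placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo
spec:
  containers:
  - name: projected-all-volume-test
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sh", "-c", "cat /all/podname /all/cm-data /all/secret-data"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: configmap-projected-all-test-volume   # placeholder name
          items:
          - key: data
            path: cm-data
      - secret:
          name: secret-projected-all-test-volume      # placeholder name
          items:
          - key: data
            path: secret-data
```

All sources land under one mount path, which is exactly what the "Check all projections for projected volume plugin" pod verifies before reaching `Succeeded`.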
• ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":144,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:08:45.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-9a527075-79b2-4d91-b854-a0e5815c1100 in namespace container-probe-4202 May 25 10:08:49.562: INFO: Started pod liveness-9a527075-79b2-4d91-b854-a0e5815c1100 in namespace container-probe-4202 STEP: checking the pod's current state and verifying that restartCount is present May 25 10:08:49.565: INFO: Initial restart count of pod liveness-9a527075-79b2-4d91-b854-a0e5815c1100 is 0 May 25 10:09:07.608: INFO: Restart count of pod container-probe-4202/liveness-9a527075-79b2-4d91-b854-a0e5815c1100 is now 1 (18.042289068s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:09:07.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4202" for this suite. 
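The probe test above creates a pod whose `/healthz` endpoint starts failing, then watches `restartCount` go from 0 to 1 once the kubelet kills and restarts the container. A hedged sketch of the pod shape (image, port, and probe timings are assumptions for illustration, not read from the run):

```yaml
# Hedged sketch: HTTP liveness probe that triggers a restart.
# Image, port, and timing values are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["liveness"]               # serves /healthz, then starts failing it
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 3
      failureThreshold: 3
```

Once `failureThreshold` consecutive probes fail, the kubelet restarts the container and increments `status.containerStatuses[].restartCount`, which is the counter the test polls.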
• [SLOW TEST:22.106 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":112,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:08:57.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:08:57.401: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-8033 I0525 10:08:57.423329 29 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8033, replica count: 1 I0525 10:08:58.475210 29 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 10:08:59.476302 29 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 10:09:00.476586 29 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady I0525 10:09:01.477765 29 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 25 10:09:01.586: INFO: Created: latency-svc-bn9x7 May 25 10:09:01.593: INFO: Got endpoints: latency-svc-bn9x7 [14.974752ms] May 25 10:09:01.600: INFO: Created: latency-svc-tkwtm May 25 10:09:01.603: INFO: Created: latency-svc-86fr2 May 25 10:09:01.604: INFO: Got endpoints: latency-svc-tkwtm [10.941625ms] May 25 10:09:01.606: INFO: Created: latency-svc-kfdhp May 25 10:09:01.607: INFO: Got endpoints: latency-svc-86fr2 [14.175325ms] May 25 10:09:01.609: INFO: Created: latency-svc-qqwvs May 25 10:09:01.609: INFO: Got endpoints: latency-svc-kfdhp [16.303998ms] May 25 10:09:01.611: INFO: Created: latency-svc-b9tg9 May 25 10:09:01.611: INFO: Got endpoints: latency-svc-qqwvs [18.716531ms] May 25 10:09:01.614: INFO: Created: latency-svc-xzfct May 25 10:09:01.615: INFO: Got endpoints: latency-svc-b9tg9 [21.392145ms] May 25 10:09:01.617: INFO: Created: latency-svc-8svh2 May 25 10:09:01.617: INFO: Got endpoints: latency-svc-xzfct [24.148425ms] May 25 10:09:01.620: INFO: Created: latency-svc-fgldh May 25 10:09:01.620: INFO: Got endpoints: latency-svc-8svh2 [26.59187ms] May 25 10:09:01.624: INFO: Got endpoints: latency-svc-fgldh [30.943109ms] May 25 10:09:01.625: INFO: Created: latency-svc-fn8l9 May 25 10:09:01.627: INFO: Created: latency-svc-ss57f May 25 10:09:01.628: INFO: Got endpoints: latency-svc-fn8l9 [35.459057ms] May 25 10:09:01.630: INFO: Created: latency-svc-bxnx2 May 25 10:09:01.630: INFO: Got endpoints: latency-svc-ss57f [37.273043ms] May 25 10:09:01.632: INFO: Created: latency-svc-24zn7 May 25 10:09:01.633: INFO: Got endpoints: latency-svc-bxnx2 [39.876516ms] May 25 10:09:01.639: INFO: Created: latency-svc-8kjd9 May 25 10:09:01.642: INFO: Got endpoints: latency-svc-24zn7 [48.542638ms] May 25 10:09:01.642: INFO: Got endpoints: latency-svc-8kjd9 [49.316326ms] May 25 
10:09:01.643: INFO: Created: latency-svc-xmgrk May 25 10:09:01.645: INFO: Got endpoints: latency-svc-xmgrk [52.40386ms] May 25 10:09:01.645: INFO: Created: latency-svc-dt8w5 May 25 10:09:01.648: INFO: Created: latency-svc-6f4jc May 25 10:09:01.649: INFO: Got endpoints: latency-svc-dt8w5 [55.577137ms] May 25 10:09:01.650: INFO: Created: latency-svc-cfvsf May 25 10:09:01.650: INFO: Got endpoints: latency-svc-6f4jc [46.420945ms] May 25 10:09:01.652: INFO: Created: latency-svc-55l4g May 25 10:09:01.655: INFO: Got endpoints: latency-svc-cfvsf [47.689167ms] May 25 10:09:01.656: INFO: Created: latency-svc-8fz6m May 25 10:09:01.656: INFO: Got endpoints: latency-svc-55l4g [46.962929ms] May 25 10:09:01.658: INFO: Created: latency-svc-j2zdw May 25 10:09:01.658: INFO: Got endpoints: latency-svc-8fz6m [46.951571ms] May 25 10:09:01.660: INFO: Created: latency-svc-lthhj May 25 10:09:01.661: INFO: Got endpoints: latency-svc-j2zdw [46.098092ms] May 25 10:09:01.663: INFO: Created: latency-svc-7sfck May 25 10:09:01.663: INFO: Got endpoints: latency-svc-lthhj [45.528176ms] May 25 10:09:01.664: INFO: Created: latency-svc-tjd28 May 25 10:09:01.665: INFO: Got endpoints: latency-svc-7sfck [44.715493ms] May 25 10:09:01.667: INFO: Created: latency-svc-gcd5w May 25 10:09:01.667: INFO: Got endpoints: latency-svc-tjd28 [42.788844ms] May 25 10:09:01.669: INFO: Created: latency-svc-8sgbj May 25 10:09:01.669: INFO: Got endpoints: latency-svc-gcd5w [40.360891ms] May 25 10:09:01.671: INFO: Got endpoints: latency-svc-8sgbj [40.085146ms] May 25 10:09:01.671: INFO: Created: latency-svc-flrdl May 25 10:09:01.672: INFO: Created: latency-svc-mpjk7 May 25 10:09:01.674: INFO: Got endpoints: latency-svc-flrdl [40.557517ms] May 25 10:09:01.680: INFO: Got endpoints: latency-svc-mpjk7 [38.309514ms] May 25 10:09:01.680: INFO: Created: latency-svc-5bzxq May 25 10:09:01.683: INFO: Created: latency-svc-rv78k May 25 10:09:01.683: INFO: Got endpoints: latency-svc-5bzxq [40.708471ms] May 25 10:09:01.685: INFO: 
Created: latency-svc-pslk9 May 25 10:09:01.687: INFO: Got endpoints: latency-svc-rv78k [41.024827ms] May 25 10:09:01.687: INFO: Created: latency-svc-kmcgn May 25 10:09:01.687: INFO: Got endpoints: latency-svc-pslk9 [38.652311ms] May 25 10:09:01.689: INFO: Created: latency-svc-zwxh9 May 25 10:09:01.689: INFO: Got endpoints: latency-svc-kmcgn [38.89271ms] May 25 10:09:01.691: INFO: Created: latency-svc-nrgzq May 25 10:09:01.693: INFO: Created: latency-svc-cwdbw May 25 10:09:01.695: INFO: Created: latency-svc-pzbd7 May 25 10:09:01.697: INFO: Created: latency-svc-7mdm8 May 25 10:09:01.699: INFO: Created: latency-svc-m6g9l May 25 10:09:01.702: INFO: Created: latency-svc-t5hb9 May 25 10:09:01.704: INFO: Created: latency-svc-lwmmc May 25 10:09:01.706: INFO: Created: latency-svc-r4rj4 May 25 10:09:01.708: INFO: Created: latency-svc-r2wc7 May 25 10:09:01.711: INFO: Created: latency-svc-qqppc May 25 10:09:01.712: INFO: Created: latency-svc-qhmhf May 25 10:09:01.714: INFO: Created: latency-svc-5jcrz May 25 10:09:01.717: INFO: Created: latency-svc-rcr5c May 25 10:09:01.719: INFO: Created: latency-svc-vhbxn May 25 10:09:01.741: INFO: Got endpoints: latency-svc-zwxh9 [86.205246ms] May 25 10:09:01.747: INFO: Created: latency-svc-xv8hw May 25 10:09:01.792: INFO: Got endpoints: latency-svc-nrgzq [136.30789ms] May 25 10:09:01.804: INFO: Created: latency-svc-p9jw2 May 25 10:09:01.842: INFO: Got endpoints: latency-svc-cwdbw [183.067579ms] May 25 10:09:01.850: INFO: Created: latency-svc-lf4w9 May 25 10:09:01.891: INFO: Got endpoints: latency-svc-pzbd7 [229.748201ms] May 25 10:09:01.898: INFO: Created: latency-svc-pxq4h May 25 10:09:01.940: INFO: Got endpoints: latency-svc-7mdm8 [277.890321ms] May 25 10:09:01.948: INFO: Created: latency-svc-tz7pn May 25 10:09:01.991: INFO: Got endpoints: latency-svc-m6g9l [326.634744ms] May 25 10:09:01.999: INFO: Created: latency-svc-97twp May 25 10:09:02.042: INFO: Got endpoints: latency-svc-t5hb9 [374.903646ms] May 25 10:09:02.049: INFO: Created: 
latency-svc-phvds May 25 10:09:02.092: INFO: Got endpoints: latency-svc-lwmmc [422.668945ms] May 25 10:09:02.099: INFO: Created: latency-svc-5gc7p May 25 10:09:02.140: INFO: Got endpoints: latency-svc-r4rj4 [469.636227ms] May 25 10:09:02.147: INFO: Created: latency-svc-jm5xz May 25 10:09:02.192: INFO: Got endpoints: latency-svc-r2wc7 [518.138099ms] May 25 10:09:02.200: INFO: Created: latency-svc-z5mrl May 25 10:09:02.241: INFO: Got endpoints: latency-svc-qqppc [561.06459ms] May 25 10:09:02.248: INFO: Created: latency-svc-wjl7v May 25 10:09:02.293: INFO: Got endpoints: latency-svc-qhmhf [609.493718ms] May 25 10:09:02.301: INFO: Created: latency-svc-2zvzd May 25 10:09:02.343: INFO: Got endpoints: latency-svc-5jcrz [655.943268ms] May 25 10:09:02.350: INFO: Created: latency-svc-vx4h9 May 25 10:09:02.390: INFO: Got endpoints: latency-svc-rcr5c [702.186467ms] May 25 10:09:02.397: INFO: Created: latency-svc-wp2w7 May 25 10:09:02.441: INFO: Got endpoints: latency-svc-vhbxn [751.500101ms] May 25 10:09:02.447: INFO: Created: latency-svc-llbgh May 25 10:09:02.491: INFO: Got endpoints: latency-svc-xv8hw [749.996779ms] May 25 10:09:02.498: INFO: Created: latency-svc-m8bkx May 25 10:09:02.541: INFO: Got endpoints: latency-svc-p9jw2 [748.138992ms] May 25 10:09:02.552: INFO: Created: latency-svc-57bvb May 25 10:09:02.597: INFO: Got endpoints: latency-svc-lf4w9 [755.158918ms] May 25 10:09:02.604: INFO: Created: latency-svc-mvxt5 May 25 10:09:02.642: INFO: Got endpoints: latency-svc-pxq4h [751.490677ms] May 25 10:09:02.652: INFO: Created: latency-svc-rml7w May 25 10:09:02.691: INFO: Got endpoints: latency-svc-tz7pn [749.990441ms] May 25 10:09:02.701: INFO: Created: latency-svc-f8lsv May 25 10:09:02.741: INFO: Got endpoints: latency-svc-97twp [749.815798ms] May 25 10:09:02.748: INFO: Created: latency-svc-khdk7 May 25 10:09:02.791: INFO: Got endpoints: latency-svc-phvds [748.678523ms] May 25 10:09:02.799: INFO: Created: latency-svc-gtzlg May 25 10:09:02.842: INFO: Got endpoints: 
latency-svc-5gc7p [749.989412ms] May 25 10:09:02.849: INFO: Created: latency-svc-lc2xh May 25 10:09:02.891: INFO: Got endpoints: latency-svc-jm5xz [750.434507ms] May 25 10:09:02.898: INFO: Created: latency-svc-qxrts May 25 10:09:02.945: INFO: Got endpoints: latency-svc-z5mrl [753.076423ms] May 25 10:09:02.952: INFO: Created: latency-svc-qfbsp May 25 10:09:02.991: INFO: Got endpoints: latency-svc-wjl7v [750.205379ms] May 25 10:09:02.999: INFO: Created: latency-svc-nxmpv May 25 10:09:03.042: INFO: Got endpoints: latency-svc-2zvzd [749.064338ms] May 25 10:09:03.054: INFO: Created: latency-svc-wzxhc May 25 10:09:03.092: INFO: Got endpoints: latency-svc-vx4h9 [748.962694ms] May 25 10:09:03.099: INFO: Created: latency-svc-zzcl6 May 25 10:09:03.141: INFO: Got endpoints: latency-svc-wp2w7 [750.850166ms] May 25 10:09:03.148: INFO: Created: latency-svc-d8vz4 May 25 10:09:03.191: INFO: Got endpoints: latency-svc-llbgh [750.573649ms] May 25 10:09:03.199: INFO: Created: latency-svc-xw28r May 25 10:09:03.241: INFO: Got endpoints: latency-svc-m8bkx [749.52022ms] May 25 10:09:03.248: INFO: Created: latency-svc-pnnzj May 25 10:09:03.291: INFO: Got endpoints: latency-svc-57bvb [750.535182ms] May 25 10:09:03.298: INFO: Created: latency-svc-866g4 May 25 10:09:03.340: INFO: Got endpoints: latency-svc-mvxt5 [743.508274ms] May 25 10:09:03.348: INFO: Created: latency-svc-m9dm6 May 25 10:09:03.391: INFO: Got endpoints: latency-svc-rml7w [749.081164ms] May 25 10:09:03.399: INFO: Created: latency-svc-xd489 May 25 10:09:03.442: INFO: Got endpoints: latency-svc-f8lsv [751.021054ms] May 25 10:09:03.449: INFO: Created: latency-svc-t8zqd May 25 10:09:03.491: INFO: Got endpoints: latency-svc-khdk7 [750.119208ms] May 25 10:09:03.499: INFO: Created: latency-svc-xjz2s May 25 10:09:03.541: INFO: Got endpoints: latency-svc-gtzlg [750.438149ms] May 25 10:09:03.548: INFO: Created: latency-svc-xkzvn May 25 10:09:03.591: INFO: Got endpoints: latency-svc-lc2xh [748.966733ms] May 25 10:09:03.599: INFO: 
Created: latency-svc-txn6g May 25 10:09:03.642: INFO: Got endpoints: latency-svc-qxrts [751.231365ms] May 25 10:09:03.650: INFO: Created: latency-svc-p5sm2 May 25 10:09:03.691: INFO: Got endpoints: latency-svc-qfbsp [745.614381ms] May 25 10:09:03.699: INFO: Created: latency-svc-mmtzk May 25 10:09:03.740: INFO: Got endpoints: latency-svc-nxmpv [749.0187ms] May 25 10:09:03.747: INFO: Created: latency-svc-86ckz May 25 10:09:03.791: INFO: Got endpoints: latency-svc-wzxhc [748.692474ms] May 25 10:09:03.799: INFO: Created: latency-svc-qjl8z May 25 10:09:03.841: INFO: Got endpoints: latency-svc-zzcl6 [749.757093ms] May 25 10:09:03.849: INFO: Created: latency-svc-dndr5 May 25 10:09:03.892: INFO: Got endpoints: latency-svc-d8vz4 [750.939435ms] May 25 10:09:03.899: INFO: Created: latency-svc-nmptq May 25 10:09:03.945: INFO: Got endpoints: latency-svc-xw28r [753.70857ms] May 25 10:09:03.953: INFO: Created: latency-svc-7gpnk May 25 10:09:03.991: INFO: Got endpoints: latency-svc-pnnzj [750.592636ms] May 25 10:09:03.999: INFO: Created: latency-svc-tsfxb May 25 10:09:04.041: INFO: Got endpoints: latency-svc-866g4 [749.419742ms] May 25 10:09:04.049: INFO: Created: latency-svc-nn5v8 May 25 10:09:04.091: INFO: Got endpoints: latency-svc-m9dm6 [750.859911ms] May 25 10:09:04.099: INFO: Created: latency-svc-9fdr2 May 25 10:09:04.141: INFO: Got endpoints: latency-svc-xd489 [749.930175ms] May 25 10:09:04.149: INFO: Created: latency-svc-sm5j2 May 25 10:09:04.191: INFO: Got endpoints: latency-svc-t8zqd [749.127682ms] May 25 10:09:04.198: INFO: Created: latency-svc-lf4dx May 25 10:09:04.241: INFO: Got endpoints: latency-svc-xjz2s [749.480069ms] May 25 10:09:04.248: INFO: Created: latency-svc-s2wmt May 25 10:09:04.290: INFO: Got endpoints: latency-svc-xkzvn [749.285909ms] May 25 10:09:04.298: INFO: Created: latency-svc-sszrb May 25 10:09:04.341: INFO: Got endpoints: latency-svc-txn6g [750.66541ms] May 25 10:09:04.349: INFO: Created: latency-svc-pvlmx May 25 10:09:04.391: INFO: Got endpoints: 
latency-svc-p5sm2 [748.722544ms] May 25 10:09:04.398: INFO: Created: latency-svc-8xcz7 May 25 10:09:04.441: INFO: Got endpoints: latency-svc-mmtzk [750.287988ms] May 25 10:09:04.448: INFO: Created: latency-svc-jx2hg May 25 10:09:04.490: INFO: Got endpoints: latency-svc-86ckz [749.864828ms] May 25 10:09:04.497: INFO: Created: latency-svc-n54nq May 25 10:09:04.541: INFO: Got endpoints: latency-svc-qjl8z [750.596305ms] May 25 10:09:04.549: INFO: Created: latency-svc-wnzd5 May 25 10:09:04.590: INFO: Got endpoints: latency-svc-dndr5 [748.681555ms] May 25 10:09:04.598: INFO: Created: latency-svc-57lkj May 25 10:09:04.641: INFO: Got endpoints: latency-svc-nmptq [749.629511ms] May 25 10:09:04.649: INFO: Created: latency-svc-97zc8 May 25 10:09:04.691: INFO: Got endpoints: latency-svc-7gpnk [745.957428ms] May 25 10:09:04.699: INFO: Created: latency-svc-6rgvs May 25 10:09:04.740: INFO: Got endpoints: latency-svc-tsfxb [749.119693ms] May 25 10:09:04.748: INFO: Created: latency-svc-mtqq2 May 25 10:09:04.791: INFO: Got endpoints: latency-svc-nn5v8 [750.120547ms] May 25 10:09:04.799: INFO: Created: latency-svc-z2gt7 May 25 10:09:04.842: INFO: Got endpoints: latency-svc-9fdr2 [751.031163ms] May 25 10:09:04.850: INFO: Created: latency-svc-jtqqt May 25 10:09:04.892: INFO: Got endpoints: latency-svc-sm5j2 [750.268505ms] May 25 10:09:04.899: INFO: Created: latency-svc-l7d7b May 25 10:09:04.942: INFO: Got endpoints: latency-svc-lf4dx [751.41539ms] May 25 10:09:04.950: INFO: Created: latency-svc-2mbdx May 25 10:09:04.991: INFO: Got endpoints: latency-svc-s2wmt [750.009939ms] May 25 10:09:04.999: INFO: Created: latency-svc-mlprx May 25 10:09:05.041: INFO: Got endpoints: latency-svc-sszrb [750.649476ms] May 25 10:09:05.048: INFO: Created: latency-svc-gt527 May 25 10:09:05.091: INFO: Got endpoints: latency-svc-pvlmx [749.473679ms] May 25 10:09:05.099: INFO: Created: latency-svc-llkbp May 25 10:09:05.141: INFO: Got endpoints: latency-svc-8xcz7 [749.763896ms] May 25 10:09:05.149: INFO: 
Created: latency-svc-7zmjs May 25 10:09:05.192: INFO: Got endpoints: latency-svc-jx2hg [750.852009ms] May 25 10:09:05.199: INFO: Created: latency-svc-vtn2d May 25 10:09:05.241: INFO: Got endpoints: latency-svc-n54nq [750.788628ms] May 25 10:09:05.249: INFO: Created: latency-svc-fmpcx May 25 10:09:05.290: INFO: Got endpoints: latency-svc-wnzd5 [748.555533ms] May 25 10:09:05.298: INFO: Created: latency-svc-dnvvj May 25 10:09:05.341: INFO: Got endpoints: latency-svc-57lkj [750.777097ms] May 25 10:09:05.348: INFO: Created: latency-svc-7lswj May 25 10:09:05.391: INFO: Got endpoints: latency-svc-97zc8 [749.64757ms] May 25 10:09:05.399: INFO: Created: latency-svc-pnktq May 25 10:09:05.443: INFO: Got endpoints: latency-svc-6rgvs [751.728725ms] May 25 10:09:05.451: INFO: Created: latency-svc-mmg46 May 25 10:09:05.492: INFO: Got endpoints: latency-svc-mtqq2 [751.742718ms] May 25 10:09:05.503: INFO: Created: latency-svc-vncbt May 25 10:09:05.541: INFO: Got endpoints: latency-svc-z2gt7 [750.299707ms] May 25 10:09:05.549: INFO: Created: latency-svc-t5gsm May 25 10:09:05.591: INFO: Got endpoints: latency-svc-jtqqt [748.224646ms] May 25 10:09:05.599: INFO: Created: latency-svc-tnjfr May 25 10:09:05.641: INFO: Got endpoints: latency-svc-l7d7b [749.337214ms] May 25 10:09:05.652: INFO: Created: latency-svc-w2gwd May 25 10:09:05.741: INFO: Got endpoints: latency-svc-2mbdx [798.555697ms] May 25 10:09:05.749: INFO: Created: latency-svc-cj4pk May 25 10:09:05.792: INFO: Got endpoints: latency-svc-mlprx [800.482902ms] May 25 10:09:05.799: INFO: Created: latency-svc-2q64v May 25 10:09:05.841: INFO: Got endpoints: latency-svc-gt527 [799.696476ms] May 25 10:09:05.847: INFO: Created: latency-svc-8pdpq May 25 10:09:05.890: INFO: Got endpoints: latency-svc-llkbp [799.275046ms] May 25 10:09:05.898: INFO: Created: latency-svc-8vd9m May 25 10:09:05.940: INFO: Got endpoints: latency-svc-7zmjs [799.594511ms] May 25 10:09:05.948: INFO: Created: latency-svc-d4c2r May 25 10:09:05.992: INFO: Got 
endpoints: latency-svc-vtn2d [799.893197ms] May 25 10:09:05.999: INFO: Created: latency-svc-nccrq May 25 10:09:06.042: INFO: Got endpoints: latency-svc-fmpcx [800.285698ms] May 25 10:09:06.049: INFO: Created: latency-svc-vgslv May 25 10:09:06.091: INFO: Got endpoints: latency-svc-dnvvj [801.340728ms] May 25 10:09:06.099: INFO: Created: latency-svc-86rl8 May 25 10:09:06.141: INFO: Got endpoints: latency-svc-7lswj [800.210482ms] May 25 10:09:06.149: INFO: Created: latency-svc-256nw May 25 10:09:06.191: INFO: Got endpoints: latency-svc-pnktq [800.219519ms] May 25 10:09:06.198: INFO: Created: latency-svc-4sh54 May 25 10:09:06.241: INFO: Got endpoints: latency-svc-mmg46 [797.777535ms] May 25 10:09:06.248: INFO: Created: latency-svc-dxk6m May 25 10:09:06.290: INFO: Got endpoints: latency-svc-vncbt [797.986228ms] May 25 10:09:06.298: INFO: Created: latency-svc-djb6g May 25 10:09:06.340: INFO: Got endpoints: latency-svc-t5gsm [798.835383ms] May 25 10:09:06.348: INFO: Created: latency-svc-wvzbc May 25 10:09:06.393: INFO: Got endpoints: latency-svc-tnjfr [801.83603ms] May 25 10:09:06.400: INFO: Created: latency-svc-w4889 May 25 10:09:06.441: INFO: Got endpoints: latency-svc-w2gwd [800.155923ms] May 25 10:09:06.484: INFO: Created: latency-svc-67mf5 May 25 10:09:06.491: INFO: Got endpoints: latency-svc-cj4pk [750.202912ms] May 25 10:09:06.499: INFO: Created: latency-svc-tb9zl May 25 10:09:06.545: INFO: Got endpoints: latency-svc-2q64v [753.826146ms] May 25 10:09:06.552: INFO: Created: latency-svc-52d72 May 25 10:09:06.591: INFO: Got endpoints: latency-svc-8pdpq [750.003548ms] May 25 10:09:06.598: INFO: Created: latency-svc-kwvr8 May 25 10:09:06.641: INFO: Got endpoints: latency-svc-8vd9m [750.217825ms] May 25 10:09:06.648: INFO: Created: latency-svc-x7h8g May 25 10:09:06.691: INFO: Got endpoints: latency-svc-d4c2r [750.938726ms] May 25 10:09:06.698: INFO: Created: latency-svc-jbrkd May 25 10:09:06.742: INFO: Got endpoints: latency-svc-nccrq [749.758496ms] May 25 10:09:06.749: 
INFO: Created: latency-svc-c6zhn May 25 10:09:06.791: INFO: Got endpoints: latency-svc-vgslv [749.204074ms] May 25 10:09:06.799: INFO: Created: latency-svc-szt5t May 25 10:09:06.842: INFO: Got endpoints: latency-svc-86rl8 [750.079766ms] May 25 10:09:06.850: INFO: Created: latency-svc-kqhjp May 25 10:09:06.892: INFO: Got endpoints: latency-svc-256nw [750.441561ms] May 25 10:09:06.899: INFO: Created: latency-svc-qmbks May 25 10:09:06.942: INFO: Got endpoints: latency-svc-4sh54 [750.450476ms] May 25 10:09:06.950: INFO: Created: latency-svc-28ztr May 25 10:09:06.990: INFO: Got endpoints: latency-svc-dxk6m [749.205856ms] May 25 10:09:06.998: INFO: Created: latency-svc-ggfj5 May 25 10:09:07.091: INFO: Got endpoints: latency-svc-djb6g [800.781745ms] May 25 10:09:07.099: INFO: Created: latency-svc-l4w56 May 25 10:09:07.140: INFO: Got endpoints: latency-svc-wvzbc [800.196246ms] May 25 10:09:07.147: INFO: Created: latency-svc-hznfb May 25 10:09:07.191: INFO: Got endpoints: latency-svc-w4889 [798.374036ms] May 25 10:09:07.199: INFO: Created: latency-svc-qs5pd May 25 10:09:07.241: INFO: Got endpoints: latency-svc-67mf5 [799.126848ms] May 25 10:09:07.248: INFO: Created: latency-svc-bnggz May 25 10:09:07.291: INFO: Got endpoints: latency-svc-tb9zl [799.981611ms] May 25 10:09:07.298: INFO: Created: latency-svc-dm2km May 25 10:09:07.341: INFO: Got endpoints: latency-svc-52d72 [795.640989ms] May 25 10:09:07.349: INFO: Created: latency-svc-26c6r May 25 10:09:07.392: INFO: Got endpoints: latency-svc-kwvr8 [801.043596ms] May 25 10:09:07.400: INFO: Created: latency-svc-rd7q8 May 25 10:09:07.442: INFO: Got endpoints: latency-svc-x7h8g [801.051013ms] May 25 10:09:07.449: INFO: Created: latency-svc-d4zrd May 25 10:09:07.491: INFO: Got endpoints: latency-svc-jbrkd [799.300944ms] May 25 10:09:07.497: INFO: Created: latency-svc-cnvtx May 25 10:09:07.541: INFO: Got endpoints: latency-svc-c6zhn [799.517457ms] May 25 10:09:07.549: INFO: Created: latency-svc-l4m46 May 25 10:09:07.593: INFO: Got 
endpoints: latency-svc-szt5t [802.218368ms] May 25 10:09:07.600: INFO: Created: latency-svc-9j97z May 25 10:09:07.640: INFO: Got endpoints: latency-svc-kqhjp [798.594372ms] May 25 10:09:07.647: INFO: Created: latency-svc-98j5g May 25 10:09:07.691: INFO: Got endpoints: latency-svc-qmbks [798.931275ms] May 25 10:09:07.699: INFO: Created: latency-svc-66l4f May 25 10:09:07.742: INFO: Got endpoints: latency-svc-28ztr [799.715268ms] May 25 10:09:07.748: INFO: Created: latency-svc-bhd6s May 25 10:09:07.792: INFO: Got endpoints: latency-svc-ggfj5 [801.282857ms] May 25 10:09:07.799: INFO: Created: latency-svc-6j6sx May 25 10:09:07.844: INFO: Got endpoints: latency-svc-l4w56 [752.897163ms] May 25 10:09:07.851: INFO: Created: latency-svc-2l4gb May 25 10:09:07.891: INFO: Got endpoints: latency-svc-hznfb [750.951045ms] May 25 10:09:07.900: INFO: Created: latency-svc-k7x85 May 25 10:09:07.942: INFO: Got endpoints: latency-svc-qs5pd [750.478695ms] May 25 10:09:07.949: INFO: Created: latency-svc-ztxc9 May 25 10:09:07.991: INFO: Got endpoints: latency-svc-bnggz [750.342986ms] May 25 10:09:07.998: INFO: Created: latency-svc-gfspg May 25 10:09:08.041: INFO: Got endpoints: latency-svc-dm2km [749.936312ms] May 25 10:09:08.050: INFO: Created: latency-svc-bf84p May 25 10:09:08.093: INFO: Got endpoints: latency-svc-26c6r [751.291226ms] May 25 10:09:08.100: INFO: Created: latency-svc-rkp9f May 25 10:09:08.141: INFO: Got endpoints: latency-svc-rd7q8 [748.933553ms] May 25 10:09:08.149: INFO: Created: latency-svc-w94xq May 25 10:09:08.191: INFO: Got endpoints: latency-svc-d4zrd [749.116923ms] May 25 10:09:08.198: INFO: Created: latency-svc-zx59t May 25 10:09:08.240: INFO: Got endpoints: latency-svc-cnvtx [749.623248ms] May 25 10:09:08.247: INFO: Created: latency-svc-lpqqn May 25 10:09:08.292: INFO: Got endpoints: latency-svc-l4m46 [750.271599ms] May 25 10:09:08.299: INFO: Created: latency-svc-9s4sd May 25 10:09:08.340: INFO: Got endpoints: latency-svc-9j97z [746.278591ms] May 25 10:09:08.347: 
INFO: Created: latency-svc-l52ng May 25 10:09:08.392: INFO: Got endpoints: latency-svc-98j5g [751.972101ms] May 25 10:09:08.399: INFO: Created: latency-svc-vfts8 May 25 10:09:08.441: INFO: Got endpoints: latency-svc-66l4f [750.683829ms] May 25 10:09:08.449: INFO: Created: latency-svc-2v2cr May 25 10:09:08.491: INFO: Got endpoints: latency-svc-bhd6s [749.812397ms] May 25 10:09:08.499: INFO: Created: latency-svc-67fbg May 25 10:09:08.542: INFO: Got endpoints: latency-svc-6j6sx [749.864026ms] May 25 10:09:08.549: INFO: Created: latency-svc-cqmt2 May 25 10:09:08.595: INFO: Got endpoints: latency-svc-2l4gb [750.853585ms] May 25 10:09:08.601: INFO: Created: latency-svc-227fc May 25 10:09:08.640: INFO: Got endpoints: latency-svc-k7x85 [748.903412ms] May 25 10:09:08.648: INFO: Created: latency-svc-qrzs9 May 25 10:09:08.691: INFO: Got endpoints: latency-svc-ztxc9 [748.824596ms] May 25 10:09:08.779: INFO: Got endpoints: latency-svc-gfspg [788.322588ms] May 25 10:09:08.782: INFO: Created: latency-svc-w7whs May 25 10:09:08.786: INFO: Created: latency-svc-zmh8j May 25 10:09:08.791: INFO: Got endpoints: latency-svc-bf84p [750.149898ms] May 25 10:09:08.879: INFO: Got endpoints: latency-svc-rkp9f [786.382488ms] May 25 10:09:08.981: INFO: Got endpoints: latency-svc-w94xq [840.177214ms] May 25 10:09:08.982: INFO: Got endpoints: latency-svc-zx59t [790.987231ms] May 25 10:09:09.079: INFO: Created: latency-svc-2dxjq May 25 10:09:09.080: INFO: Got endpoints: latency-svc-9s4sd [788.310887ms] May 25 10:09:09.080: INFO: Got endpoints: latency-svc-lpqqn [839.534498ms] May 25 10:09:09.182: INFO: Got endpoints: latency-svc-vfts8 [789.212637ms] May 25 10:09:09.182: INFO: Got endpoints: latency-svc-l52ng [841.945345ms] May 25 10:09:09.182: INFO: Created: latency-svc-rhfrc May 25 10:09:09.189: INFO: Created: latency-svc-85c8x May 25 10:09:09.192: INFO: Got endpoints: latency-svc-2v2cr [750.744366ms] May 25 10:09:09.196: INFO: Created: latency-svc-tqj8m May 25 10:09:09.202: INFO: Created: 
latency-svc-vxbjn May 25 10:09:09.390: INFO: Created: latency-svc-fb7fx May 25 10:09:09.390: INFO: Got endpoints: latency-svc-67fbg [898.649681ms] May 25 10:09:09.391: INFO: Got endpoints: latency-svc-cqmt2 [849.07176ms] May 25 10:09:09.391: INFO: Got endpoints: latency-svc-227fc [795.638556ms] May 25 10:09:09.480: INFO: Created: latency-svc-jdx9r May 25 10:09:09.480: INFO: Got endpoints: latency-svc-qrzs9 [839.145518ms] May 25 10:09:09.480: INFO: Got endpoints: latency-svc-w7whs [789.230654ms] May 25 10:09:09.683: INFO: Got endpoints: latency-svc-2dxjq [891.649854ms] May 25 10:09:09.689: INFO: Got endpoints: latency-svc-zmh8j [909.128817ms] May 25 10:09:09.689: INFO: Got endpoints: latency-svc-rhfrc [809.772963ms] May 25 10:09:09.689: INFO: Got endpoints: latency-svc-85c8x [707.334274ms] May 25 10:09:10.079: INFO: Got endpoints: latency-svc-tqj8m [1.097008053s] May 25 10:09:10.079: INFO: Got endpoints: latency-svc-vxbjn [999.261501ms] May 25 10:09:10.079: INFO: Got endpoints: latency-svc-fb7fx [999.401782ms] May 25 10:09:10.080: INFO: Got endpoints: latency-svc-jdx9r [898.394077ms] May 25 10:09:10.178: INFO: Created: latency-svc-p9j4n May 25 10:09:10.282: INFO: Got endpoints: latency-svc-p9j4n [1.100783409s] May 25 10:09:10.285: INFO: Created: latency-svc-6kzl6 May 25 10:09:10.289: INFO: Created: latency-svc-q2p7k May 25 10:09:10.292: INFO: Got endpoints: latency-svc-6kzl6 [1.099776029s] May 25 10:09:10.293: INFO: Got endpoints: latency-svc-q2p7k [902.363347ms] May 25 10:09:10.295: INFO: Created: latency-svc-4h8ml May 25 10:09:10.299: INFO: Got endpoints: latency-svc-4h8ml [908.717198ms] May 25 10:09:10.302: INFO: Created: latency-svc-8rs5j May 25 10:09:10.305: INFO: Got endpoints: latency-svc-8rs5j [913.904178ms] May 25 10:09:10.305: INFO: Created: latency-svc-cbzbl May 25 10:09:10.311: INFO: Created: latency-svc-d7ftk May 25 10:09:10.311: INFO: Got endpoints: latency-svc-cbzbl [831.593353ms] May 25 10:09:10.314: INFO: Got endpoints: latency-svc-d7ftk 
[834.341514ms] May 25 10:09:10.316: INFO: Created: latency-svc-dxb4q May 25 10:09:10.320: INFO: Got endpoints: latency-svc-dxb4q [636.689955ms] May 25 10:09:10.320: INFO: Latencies: [10.941625ms 14.175325ms 16.303998ms 18.716531ms 21.392145ms 24.148425ms 26.59187ms 30.943109ms 35.459057ms 37.273043ms 38.309514ms 38.652311ms 38.89271ms 39.876516ms 40.085146ms 40.360891ms 40.557517ms 40.708471ms 41.024827ms 42.788844ms 44.715493ms 45.528176ms 46.098092ms 46.420945ms 46.951571ms 46.962929ms 47.689167ms 48.542638ms 49.316326ms 52.40386ms 55.577137ms 86.205246ms 136.30789ms 183.067579ms 229.748201ms 277.890321ms 326.634744ms 374.903646ms 422.668945ms 469.636227ms 518.138099ms 561.06459ms 609.493718ms 636.689955ms 655.943268ms 702.186467ms 707.334274ms 743.508274ms 745.614381ms 745.957428ms 746.278591ms 748.138992ms 748.224646ms 748.555533ms 748.678523ms 748.681555ms 748.692474ms 748.722544ms 748.824596ms 748.903412ms 748.933553ms 748.962694ms 748.966733ms 749.0187ms 749.064338ms 749.081164ms 749.116923ms 749.119693ms 749.127682ms 749.204074ms 749.205856ms 749.285909ms 749.337214ms 749.419742ms 749.473679ms 749.480069ms 749.52022ms 749.623248ms 749.629511ms 749.64757ms 749.757093ms 749.758496ms 749.763896ms 749.812397ms 749.815798ms 749.864026ms 749.864828ms 749.930175ms 749.936312ms 749.989412ms 749.990441ms 749.996779ms 750.003548ms 750.009939ms 750.079766ms 750.119208ms 750.120547ms 750.149898ms 750.202912ms 750.205379ms 750.217825ms 750.268505ms 750.271599ms 750.287988ms 750.299707ms 750.342986ms 750.434507ms 750.438149ms 750.441561ms 750.450476ms 750.478695ms 750.535182ms 750.573649ms 750.592636ms 750.596305ms 750.649476ms 750.66541ms 750.683829ms 750.744366ms 750.777097ms 750.788628ms 750.850166ms 750.852009ms 750.853585ms 750.859911ms 750.938726ms 750.939435ms 750.951045ms 751.021054ms 751.031163ms 751.231365ms 751.291226ms 751.41539ms 751.490677ms 751.500101ms 751.728725ms 751.742718ms 751.972101ms 752.897163ms 753.076423ms 753.70857ms 753.826146ms 755.158918ms 
786.382488ms 788.310887ms 788.322588ms 789.212637ms 789.230654ms 790.987231ms 795.638556ms 795.640989ms 797.777535ms 797.986228ms 798.374036ms 798.555697ms 798.594372ms 798.835383ms 798.931275ms 799.126848ms 799.275046ms 799.300944ms 799.517457ms 799.594511ms 799.696476ms 799.715268ms 799.893197ms 799.981611ms 800.155923ms 800.196246ms 800.210482ms 800.219519ms 800.285698ms 800.482902ms 800.781745ms 801.043596ms 801.051013ms 801.282857ms 801.340728ms 801.83603ms 802.218368ms 809.772963ms 831.593353ms 834.341514ms 839.145518ms 839.534498ms 840.177214ms 841.945345ms 849.07176ms 891.649854ms 898.394077ms 898.649681ms 902.363347ms 908.717198ms 909.128817ms 913.904178ms 999.261501ms 999.401782ms 1.097008053s 1.099776029s 1.100783409s] May 25 10:09:10.320: INFO: 50 %ile: 750.217825ms May 25 10:09:10.320: INFO: 90 %ile: 809.772963ms May 25 10:09:10.320: INFO: 99 %ile: 1.099776029s May 25 10:09:10.320: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:09:10.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-8033" for this suite. 
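The 50/90/99 %ile figures reported above are taken by rank from the ascending-sorted 200-sample latency list. A minimal sketch of that kind of rank-based percentile lookup (an illustration only, not the e2e framework's exact code; the `sample` values here are hypothetical):

```python
def percentile_by_rank(sorted_latencies, p):
    """Return the p-th percentile of an ascending-sorted sample by rank."""
    # Pick the entry at rank p% of the sample size, clamped to the last index.
    idx = min(len(sorted_latencies) - 1, int(len(sorted_latencies) * p / 100))
    return sorted_latencies[idx]

# Hypothetical endpoint latencies in seconds, already sorted ascending:
sample = [0.010, 0.045, 0.748, 0.750, 0.751, 0.790, 0.800, 0.910, 1.097, 1.100]

p50 = percentile_by_rank(sample, 50)  # index 5 -> 0.790
p90 = percentile_by_rank(sample, 90)  # index 9 -> 1.100
p99 = percentile_by_rank(sample, 99)  # index 9 -> 1.100
```

The "should not be very high" check presumably compares percentiles like these against acceptable thresholds, which is why the full sorted sample is logged: the tail (here the ~1.1 s entries) can be inspected directly when a run is slow.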
• [SLOW TEST:12.964 seconds]
[sig-network] Service endpoints latency
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should not be very high [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":7,"skipped":132,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:09:07.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
May 25 10:09:10.296: INFO: Successfully updated pod "adopt-release-25j78"
STEP: Checking that the Job readopts the Pod
May 25 10:09:10.296: INFO: Waiting up to 15m0s for pod "adopt-release-25j78" in namespace "job-9580" to be "adopted"
May 25 10:09:10.299: INFO: Pod "adopt-release-25j78": Phase="Running", Reason="", readiness=true. Elapsed: 3.723382ms
May 25 10:09:12.678: INFO: Pod "adopt-release-25j78": Phase="Running", Reason="", readiness=true. Elapsed: 2.381868052s
May 25 10:09:12.678: INFO: Pod "adopt-release-25j78" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
May 25 10:09:13.279: INFO: Successfully updated pod "adopt-release-25j78"
STEP: Checking that the Job releases the Pod
May 25 10:09:13.279: INFO: Waiting up to 15m0s for pod "adopt-release-25j78" in namespace "job-9580" to be "released"
May 25 10:09:13.390: INFO: Pod "adopt-release-25j78": Phase="Running", Reason="", readiness=true. Elapsed: 111.255727ms
May 25 10:09:13.390: INFO: Pod "adopt-release-25j78" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:09:13.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9580" for this suite.
• [SLOW TEST:6.773 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":8,"skipped":159,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:08:14.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W0525 10:08:14.210837      24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 25 10:08:14.210: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 25 10:08:14.213: INFO:
No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-8087 STEP: creating service affinity-nodeport-transition in namespace services-8087 STEP: creating replication controller affinity-nodeport-transition in namespace services-8087 I0525 10:08:14.226337 24 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-8087, replica count: 3 I0525 10:08:17.279238 24 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 10:08:20.279488 24 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 10:08:23.280389 24 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 10:08:26.282806 24 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 10:08:29.283667 24 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 25 10:08:29.294: INFO: Creating new exec pod May 25 10:08:32.311: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 
--kubeconfig=/root/.kube/config --namespace=services-8087 exec execpod-affinityn8mf6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' May 25 10:08:33.018: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" May 25 10:08:33.018: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 25 10:08:33.018: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-8087 exec execpod-affinityn8mf6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.96.81.148 80' May 25 10:08:33.263: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.96.81.148 80\nConnection to 10.96.81.148 80 port [tcp/http] succeeded!\n" May 25 10:08:33.263: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 25 10:08:33.263: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-8087 exec execpod-affinityn8mf6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.4 30679' May 25 10:08:33.497: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.4 30679\nConnection to 172.18.0.4 30679 port [tcp/*] succeeded!\n" May 25 10:08:33.497: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 25 10:08:33.497: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-8087 exec execpod-affinityn8mf6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.2 30679' May 25 10:08:33.732: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.2 30679\nConnection to 172.18.0.2 30679 port [tcp/*] succeeded!\n" May 25 10:08:33.732: INFO: stdout: 
"HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 25 10:08:33.742: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-8087 exec execpod-affinityn8mf6 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.4:30679/ ; done' May 25 10:08:34.104: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n" May 25 10:08:34.104: INFO: stdout: 
"\naffinity-nodeport-transition-p8mtl\naffinity-nodeport-transition-p8mtl\naffinity-nodeport-transition-p8mtl\naffinity-nodeport-transition-4q6tv\naffinity-nodeport-transition-p8mtl\naffinity-nodeport-transition-p8mtl\naffinity-nodeport-transition-4q6tv\naffinity-nodeport-transition-4q6tv\naffinity-nodeport-transition-p8mtl\naffinity-nodeport-transition-p8mtl\naffinity-nodeport-transition-znvmp\naffinity-nodeport-transition-znvmp\naffinity-nodeport-transition-4q6tv\naffinity-nodeport-transition-p8mtl\naffinity-nodeport-transition-p8mtl\naffinity-nodeport-transition-znvmp" May 25 10:08:34.104: INFO: Received response from host: affinity-nodeport-transition-p8mtl May 25 10:08:34.104: INFO: Received response from host: affinity-nodeport-transition-p8mtl May 25 10:08:34.104: INFO: Received response from host: affinity-nodeport-transition-p8mtl May 25 10:08:34.104: INFO: Received response from host: affinity-nodeport-transition-4q6tv May 25 10:08:34.104: INFO: Received response from host: affinity-nodeport-transition-p8mtl May 25 10:08:34.104: INFO: Received response from host: affinity-nodeport-transition-p8mtl May 25 10:08:34.104: INFO: Received response from host: affinity-nodeport-transition-4q6tv May 25 10:08:34.104: INFO: Received response from host: affinity-nodeport-transition-4q6tv May 25 10:08:34.104: INFO: Received response from host: affinity-nodeport-transition-p8mtl May 25 10:08:34.104: INFO: Received response from host: affinity-nodeport-transition-p8mtl May 25 10:08:34.104: INFO: Received response from host: affinity-nodeport-transition-znvmp May 25 10:08:34.104: INFO: Received response from host: affinity-nodeport-transition-znvmp May 25 10:08:34.104: INFO: Received response from host: affinity-nodeport-transition-4q6tv May 25 10:08:34.104: INFO: Received response from host: affinity-nodeport-transition-p8mtl May 25 10:08:34.104: INFO: Received response from host: affinity-nodeport-transition-p8mtl May 25 10:08:34.104: INFO: Received response from host: 
affinity-nodeport-transition-znvmp May 25 10:08:34.113: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-8087 exec execpod-affinityn8mf6 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.4:30679/ ; done' May 25 10:08:34.435: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n" May 25 10:08:34.435: INFO: stdout: 
"\naffinity-nodeport-transition-4q6tv\naffinity-nodeport-transition-p8mtl\naffinity-nodeport-transition-znvmp\naffinity-nodeport-transition-4q6tv\naffinity-nodeport-transition-4q6tv\naffinity-nodeport-transition-p8mtl\naffinity-nodeport-transition-znvmp\naffinity-nodeport-transition-znvmp\naffinity-nodeport-transition-p8mtl\naffinity-nodeport-transition-znvmp\naffinity-nodeport-transition-znvmp\naffinity-nodeport-transition-p8mtl\naffinity-nodeport-transition-4q6tv\naffinity-nodeport-transition-4q6tv\naffinity-nodeport-transition-znvmp\naffinity-nodeport-transition-4q6tv" May 25 10:08:34.435: INFO: Received response from host: affinity-nodeport-transition-4q6tv May 25 10:08:34.435: INFO: Received response from host: affinity-nodeport-transition-p8mtl May 25 10:08:34.435: INFO: Received response from host: affinity-nodeport-transition-znvmp May 25 10:08:34.435: INFO: Received response from host: affinity-nodeport-transition-4q6tv May 25 10:08:34.435: INFO: Received response from host: affinity-nodeport-transition-4q6tv May 25 10:08:34.435: INFO: Received response from host: affinity-nodeport-transition-p8mtl May 25 10:08:34.435: INFO: Received response from host: affinity-nodeport-transition-znvmp May 25 10:08:34.435: INFO: Received response from host: affinity-nodeport-transition-znvmp May 25 10:08:34.435: INFO: Received response from host: affinity-nodeport-transition-p8mtl May 25 10:08:34.435: INFO: Received response from host: affinity-nodeport-transition-znvmp May 25 10:08:34.435: INFO: Received response from host: affinity-nodeport-transition-znvmp May 25 10:08:34.435: INFO: Received response from host: affinity-nodeport-transition-p8mtl May 25 10:08:34.435: INFO: Received response from host: affinity-nodeport-transition-4q6tv May 25 10:08:34.435: INFO: Received response from host: affinity-nodeport-transition-4q6tv May 25 10:08:34.435: INFO: Received response from host: affinity-nodeport-transition-znvmp May 25 10:08:34.435: INFO: Received response from host: 
affinity-nodeport-transition-4q6tv May 25 10:09:04.436: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-8087 exec execpod-affinityn8mf6 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.4:30679/ ; done' May 25 10:09:04.831: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30679/\n" May 25 10:09:04.831: INFO: stdout: 
"\naffinity-nodeport-transition-p8mtl\naffinity-nodeport-transition-p8mtl\naffinity-nodeport-transition-p8mtl\naffinity-nodeport-transition-p8mtl\naffinity-nodeport-transition-p8mtl\naffinity-nodeport-transition-p8mtl\naffinity-nodeport-transition-p8mtl\naffinity-nodeport-transition-p8mtl\naffinity-nodeport-transition-p8mtl\naffinity-nodeport-transition-p8mtl\naffinity-nodeport-transition-p8mtl\naffinity-nodeport-transition-p8mtl\naffinity-nodeport-transition-p8mtl\naffinity-nodeport-transition-p8mtl\naffinity-nodeport-transition-p8mtl\naffinity-nodeport-transition-p8mtl" May 25 10:09:04.831: INFO: Received response from host: affinity-nodeport-transition-p8mtl May 25 10:09:04.831: INFO: Received response from host: affinity-nodeport-transition-p8mtl May 25 10:09:04.831: INFO: Received response from host: affinity-nodeport-transition-p8mtl May 25 10:09:04.831: INFO: Received response from host: affinity-nodeport-transition-p8mtl May 25 10:09:04.832: INFO: Received response from host: affinity-nodeport-transition-p8mtl May 25 10:09:04.832: INFO: Received response from host: affinity-nodeport-transition-p8mtl May 25 10:09:04.832: INFO: Received response from host: affinity-nodeport-transition-p8mtl May 25 10:09:04.832: INFO: Received response from host: affinity-nodeport-transition-p8mtl May 25 10:09:04.832: INFO: Received response from host: affinity-nodeport-transition-p8mtl May 25 10:09:04.832: INFO: Received response from host: affinity-nodeport-transition-p8mtl May 25 10:09:04.832: INFO: Received response from host: affinity-nodeport-transition-p8mtl May 25 10:09:04.832: INFO: Received response from host: affinity-nodeport-transition-p8mtl May 25 10:09:04.832: INFO: Received response from host: affinity-nodeport-transition-p8mtl May 25 10:09:04.832: INFO: Received response from host: affinity-nodeport-transition-p8mtl May 25 10:09:04.832: INFO: Received response from host: affinity-nodeport-transition-p8mtl May 25 10:09:04.832: INFO: Received response from host: 
affinity-nodeport-transition-p8mtl May 25 10:09:04.832: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-8087, will wait for the garbage collector to delete the pods May 25 10:09:04.902: INFO: Deleting ReplicationController affinity-nodeport-transition took: 4.928417ms May 25 10:09:05.003: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.720636ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:09:15.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8087" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:61.594 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0} SSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:08:14.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset W0525 10:08:14.185789 23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 25 10:08:14.185: INFO: Found PodSecurityPolicies; testing pod 
creation to see if PodSecurityPolicy is enabled May 25 10:08:14.193: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-2521 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating stateful set ss in namespace statefulset-2521 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2521 May 25 10:08:14.204: INFO: Found 0 stateful pods, waiting for 1 May 25 10:08:24.209: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false May 25 10:08:34.209: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 25 10:08:34.212: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=statefulset-2521 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 25 10:08:34.688: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 25 10:08:34.688: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 25 10:08:34.688: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 25 10:08:34.692: INFO: Waiting for pod ss-0 to enter Running - 
Ready=false, currently Running - Ready=true May 25 10:08:44.696: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 25 10:08:44.697: INFO: Waiting for statefulset status.replicas updated to 0 May 25 10:08:44.711: INFO: POD NODE PHASE GRACE CONDITIONS May 25 10:08:44.711: INFO: ss-0 v1.21-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:14 +0000 UTC }] May 25 10:08:44.711: INFO: May 25 10:08:44.711: INFO: StatefulSet ss has not reached scale 3, at 1 May 25 10:08:45.715: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996687029s May 25 10:08:46.720: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.992688195s May 25 10:08:47.724: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.98818753s May 25 10:08:48.729: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.98369231s May 25 10:08:49.734: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.978518125s May 25 10:08:50.738: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.974150895s May 25 10:08:51.742: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.969925935s May 25 10:08:52.748: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.964402247s May 25 10:08:53.753: INFO: Verifying statefulset ss doesn't scale past 3 for another 959.606598ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2521 May 25 10:08:54.758: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 
--kubeconfig=/root/.kube/config --namespace=statefulset-2521 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 25 10:08:55.017: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 25 10:08:55.017: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 25 10:08:55.017: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 25 10:08:55.017: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=statefulset-2521 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 25 10:08:55.268: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" May 25 10:08:55.268: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 25 10:08:55.268: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 25 10:08:55.268: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=statefulset-2521 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 25 10:08:55.500: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" May 25 10:08:55.500: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 25 10:08:55.500: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 25 10:08:55.504: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 25 10:08:55.504: INFO: Waiting for pod ss-1 to enter 
Running - Ready=true, currently Running - Ready=true May 25 10:08:55.504: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 25 10:08:55.511: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=statefulset-2521 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 25 10:08:55.758: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 25 10:08:55.758: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 25 10:08:55.758: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 25 10:08:55.758: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=statefulset-2521 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 25 10:08:55.990: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 25 10:08:55.990: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 25 10:08:55.990: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 25 10:08:55.990: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=statefulset-2521 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 25 10:08:56.230: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 25 10:08:56.230: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 25 10:08:56.230: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> 
'/tmp/index.html' May 25 10:08:56.230: INFO: Waiting for statefulset status.replicas updated to 0 May 25 10:08:56.233: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 25 10:09:06.243: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 25 10:09:06.243: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 25 10:09:06.243: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 25 10:09:06.256: INFO: POD NODE PHASE GRACE CONDITIONS May 25 10:09:06.256: INFO: ss-0 v1.21-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:14 +0000 UTC }] May 25 10:09:06.256: INFO: ss-1 v1.21-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC }] May 25 10:09:06.256: INFO: ss-2 v1.21-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC }] May 25 10:09:06.257: INFO: May 25 10:09:06.257: INFO: StatefulSet ss has not reached scale 0, at 3 May 25 10:09:07.261: INFO: POD NODE PHASE GRACE CONDITIONS May 25 10:09:07.261: INFO: ss-0 v1.21-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:14 +0000 UTC }] May 25 10:09:07.261: INFO: ss-1 v1.21-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC }] May 25 10:09:07.261: INFO: ss-2 v1.21-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC }] May 25 10:09:07.261: INFO: May 25 10:09:07.261: INFO: StatefulSet ss has not reached scale 0, at 3 May 25 10:09:08.265: INFO: POD NODE PHASE GRACE CONDITIONS May 25 10:09:08.265: INFO: ss-0 v1.21-worker Running 30s [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:14 +0000 UTC }] May 25 10:09:08.265: INFO: ss-1 v1.21-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC }] May 25 10:09:08.265: INFO: ss-2 v1.21-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC }] May 25 10:09:08.265: INFO: May 25 10:09:08.265: INFO: StatefulSet ss has not reached scale 0, at 3 May 25 10:09:09.391: INFO: POD NODE PHASE GRACE CONDITIONS May 25 10:09:09.391: INFO: ss-0 v1.21-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready 
status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:14 +0000 UTC }] May 25 10:09:09.391: INFO: ss-1 v1.21-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC }] May 25 10:09:09.392: INFO: ss-2 v1.21-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC }] May 25 10:09:09.392: INFO: May 25 10:09:09.392: INFO: StatefulSet ss has not reached scale 0, at 3 May 25 10:09:10.482: INFO: POD NODE PHASE GRACE CONDITIONS May 25 10:09:10.482: INFO: ss-0 v1.21-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:14 +0000 UTC }] May 25 10:09:10.482: INFO: ss-1 v1.21-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC }] May 25 10:09:10.482: INFO: ss-2 v1.21-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC }] May 25 10:09:10.482: INFO: May 25 10:09:10.482: INFO: StatefulSet ss has not reached scale 0, at 3 May 25 10:09:11.780: INFO: POD NODE PHASE GRACE CONDITIONS May 25 10:09:11.780: INFO: ss-0 v1.21-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:14 +0000 UTC }] May 25 10:09:11.780: INFO: ss-1 v1.21-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC }] May 25 10:09:11.780: INFO: ss-2 
v1.21-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC }] May 25 10:09:11.780: INFO: May 25 10:09:11.780: INFO: StatefulSet ss has not reached scale 0, at 3 May 25 10:09:12.883: INFO: POD NODE PHASE GRACE CONDITIONS May 25 10:09:12.883: INFO: ss-0 v1.21-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:14 +0000 UTC }] May 25 10:09:12.883: INFO: ss-1 v1.21-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC }] May 25 10:09:12.883: INFO: ss-2 v1.21-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC }] May 25 10:09:12.884: INFO: May 25 10:09:12.884: INFO: StatefulSet ss has not reached scale 0, at 3 May 25 10:09:13.984: INFO: POD NODE PHASE GRACE CONDITIONS May 25 10:09:13.984: INFO: ss-0 v1.21-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:14 +0000 UTC }] May 25 10:09:13.984: INFO: ss-1 v1.21-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC }] May 25 10:09:13.984: INFO: ss-2 v1.21-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC }] May 25 10:09:13.984: INFO: May 25 10:09:13.984: INFO: StatefulSet ss has not reached scale 0, at 3 May 25 10:09:15.081: INFO: POD NODE PHASE GRACE CONDITIONS May 25 10:09:15.081: 
INFO: ss-0 v1.21-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:14 +0000 UTC }] May 25 10:09:15.081: INFO: ss-1 v1.21-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC }] May 25 10:09:15.081: INFO: ss-2 v1.21-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:08:44 +0000 UTC }] May 25 10:09:15.081: INFO: May 25 10:09:15.081: INFO: StatefulSet ss has not reached scale 0, at 3 May 25 10:09:16.084: INFO: Verifying statefulset ss doesn't scale past 0 for another 170.649037ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-2521 May 25 10:09:17.087: INFO: Scaling statefulset ss to 0 May 25 10:09:17.098: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] Basic StatefulSet functionality 
[StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 May 25 10:09:17.100: INFO: Deleting all statefulset in ns statefulset-2521 May 25 10:09:17.103: INFO: Scaling statefulset ss to 0 May 25 10:09:17.113: INFO: Waiting for statefulset status.replicas updated to 0 May 25 10:09:17.116: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:09:17.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2521" for this suite. • [SLOW TEST:62.980 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:09:07.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting up the test STEP: Creating hostNetwork=false pod May 25 10:09:07.710: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) May 25 10:09:10.184: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) May 25 10:09:11.778: INFO: The status of Pod test-pod is Running (Ready = true) STEP: Creating hostNetwork=true pod May 25 10:09:12.284: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) May 25 10:09:14.479: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) May 25 10:09:16.288: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) May 25 10:09:18.289: INFO: The status of Pod test-host-network-pod is Running (Ready = true) STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 25 10:09:18.292: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1327 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 10:09:18.293: INFO: >>> kubeConfig: /root/.kube/config May 25 10:09:18.421: INFO: Exec stderr: "" May 25 10:09:18.421: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1327 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 10:09:18.421: INFO: >>> kubeConfig: /root/.kube/config May 25 10:09:18.502: INFO: Exec stderr: "" May 25 10:09:18.503: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1327 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false Quiet:false} May 25 10:09:18.503: INFO: >>> kubeConfig: /root/.kube/config May 25 10:09:18.624: INFO: Exec stderr: "" May 25 10:09:18.625: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1327 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 10:09:18.625: INFO: >>> kubeConfig: /root/.kube/config May 25 10:09:18.743: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 25 10:09:18.743: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1327 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 10:09:18.743: INFO: >>> kubeConfig: /root/.kube/config May 25 10:09:18.874: INFO: Exec stderr: "" May 25 10:09:18.874: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1327 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 10:09:18.874: INFO: >>> kubeConfig: /root/.kube/config May 25 10:09:18.984: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 25 10:09:18.984: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1327 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 10:09:18.984: INFO: >>> kubeConfig: /root/.kube/config May 25 10:09:19.089: INFO: Exec stderr: "" May 25 10:09:19.089: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1327 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 10:09:19.089: INFO: 
>>> kubeConfig: /root/.kube/config May 25 10:09:19.171: INFO: Exec stderr: "" May 25 10:09:19.171: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1327 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 10:09:19.171: INFO: >>> kubeConfig: /root/.kube/config May 25 10:09:19.276: INFO: Exec stderr: "" May 25 10:09:19.276: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1327 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 10:09:19.276: INFO: >>> kubeConfig: /root/.kube/config May 25 10:09:19.401: INFO: Exec stderr: "" [AfterEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:09:19.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-1327" for this suite. 
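The steps above exercise three /etc/hosts cases: containers in a `hostNetwork: false` pod (kubelet-managed), a container that mounts its own /etc/hosts (not managed), and a `hostNetwork: true` pod (not managed). A minimal sketch of a pod spec reproducing the first two cases — pod name, container names, and image here are illustrative, not the ones the framework generates:

```yaml
# Hypothetical pod for the kubelet-managed /etc/hosts cases above.
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-demo           # illustrative name
spec:
  hostNetwork: false             # kubelet manages /etc/hosts for these containers
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "3600"]
  - name: busybox-3
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: host-etc-hosts
      mountPath: /etc/hosts      # explicit mount: kubelet does NOT manage this file
  volumes:
  - name: host-etc-hosts
    hostPath:
      path: /etc/hosts
      type: File
```

A second pod identical except for `hostNetwork: true` covers the third case; there the kubelet likewise leaves /etc/hosts untouched.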
• [SLOW TEST:11.739 seconds] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":138,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:09:19.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:09:19.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6090" for this suite. 
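The immutable-Secret check above amounts to creating a Secret with `immutable: true` and verifying that later writes to it are rejected. A hedged sketch — the name and data below are made up, not taken from the test:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: demo-immutable-secret    # illustrative name
data:
  key: dmFsdWU=                  # base64 for "value"
immutable: true                  # once set, updates to data/binaryData are rejected
```

An immutable Secret cannot be edited or have `immutable` flipped back to false; the only way to change it is to delete and recreate it.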
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":8,"skipped":140,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:09:15.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium May 25 10:09:15.997: INFO: Waiting up to 5m0s for pod "pod-7c7b5f61-44cd-4f5d-951e-6f8b0284c7f6" in namespace "emptydir-1309" to be "Succeeded or Failed" May 25 10:09:16.002: INFO: Pod "pod-7c7b5f61-44cd-4f5d-951e-6f8b0284c7f6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.033714ms May 25 10:09:18.009: INFO: Pod "pod-7c7b5f61-44cd-4f5d-951e-6f8b0284c7f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011788298s May 25 10:09:20.012: INFO: Pod "pod-7c7b5f61-44cd-4f5d-951e-6f8b0284c7f6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.015376266s STEP: Saw pod success May 25 10:09:20.012: INFO: Pod "pod-7c7b5f61-44cd-4f5d-951e-6f8b0284c7f6" satisfied condition "Succeeded or Failed" May 25 10:09:20.016: INFO: Trying to get logs from node v1.21-worker pod pod-7c7b5f61-44cd-4f5d-951e-6f8b0284c7f6 container test-container: STEP: delete the pod May 25 10:09:20.030: INFO: Waiting for pod pod-7c7b5f61-44cd-4f5d-951e-6f8b0284c7f6 to disappear May 25 10:09:20.033: INFO: Pod pod-7c7b5f61-44cd-4f5d-951e-6f8b0284c7f6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:09:20.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1309" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":12,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:09:19.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs May 25 10:09:19.606: INFO: Waiting up to 5m0s for pod "pod-0d68ab36-de21-4d55-9506-c7a626f50c2a" in namespace "emptydir-5458" to be "Succeeded or Failed" May 25 10:09:19.608: INFO: Pod "pod-0d68ab36-de21-4d55-9506-c7a626f50c2a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.406667ms May 25 10:09:21.612: INFO: Pod "pod-0d68ab36-de21-4d55-9506-c7a626f50c2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006351736s STEP: Saw pod success May 25 10:09:21.612: INFO: Pod "pod-0d68ab36-de21-4d55-9506-c7a626f50c2a" satisfied condition "Succeeded or Failed" May 25 10:09:21.615: INFO: Trying to get logs from node v1.21-worker2 pod pod-0d68ab36-de21-4d55-9506-c7a626f50c2a container test-container: STEP: delete the pod May 25 10:09:21.627: INFO: Waiting for pod pod-0d68ab36-de21-4d55-9506-c7a626f50c2a to disappear May 25 10:09:21.630: INFO: Pod pod-0d68ab36-de21-4d55-9506-c7a626f50c2a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:09:21.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5458" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":187,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:08:14.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc W0525 10:08:14.207058 19 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 25 10:08:14.207: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 25 10:08:14.210: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the 
deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics W0525 10:08:20.246991 19 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 25 10:09:22.267: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:09:22.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3346" for this suite. • [SLOW TEST:68.091 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:09:14.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 
10:09:14.798: INFO: Pod name sample-pod: Found 0 pods out of 1 May 25 10:09:19.803: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: Scaling up "test-rs" replicaset May 25 10:09:19.813: INFO: Updating replica set "test-rs" STEP: patching the ReplicaSet May 25 10:09:19.820: INFO: observed ReplicaSet test-rs in namespace replicaset-4626 with ReadyReplicas 1, AvailableReplicas 1 May 25 10:09:19.832: INFO: observed ReplicaSet test-rs in namespace replicaset-4626 with ReadyReplicas 1, AvailableReplicas 1 May 25 10:09:19.844: INFO: observed ReplicaSet test-rs in namespace replicaset-4626 with ReadyReplicas 1, AvailableReplicas 1 May 25 10:09:19.849: INFO: observed ReplicaSet test-rs in namespace replicaset-4626 with ReadyReplicas 1, AvailableReplicas 1 May 25 10:09:23.425: INFO: observed ReplicaSet test-rs in namespace replicaset-4626 with ReadyReplicas 2, AvailableReplicas 2 May 25 10:09:23.825: INFO: observed ReplicaSet test-rs in namespace replicaset-4626 with ReadyReplicas 3, found true [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:09:23.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4626" for this suite. 
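The scale-up from 1 to 3 ready replicas observed above can be expressed as a strategic-merge patch against the ReplicaSet; a minimal sketch (the replica count is the one the log converges to, everything else is assumed):

```yaml
# Patch fragment for the ReplicaSet "test-rs"; applied e.g. with
# kubectl -n replicaset-4626 patch rs test-rs -p '{"spec":{"replicas":3}}'
spec:
  replicas: 3
```

`kubectl scale rs test-rs --replicas=3` achieves the same end state; the test uses a patch so it can also observe the intermediate status updates.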
• [SLOW TEST:9.827 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":9,"skipped":171,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:09:10.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:09:16.503: INFO: Deleting pod "var-expansion-ee9993e0-49f8-4af2-9634-0abff3c2753d" in namespace "var-expansion-9318" May 25 10:09:16.507: INFO: Wait up to 5m0s for pod "var-expansion-ee9993e0-49f8-4af2-9634-0abff3c2753d" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:09:26.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9318" for this suite. 
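The failing substitution above comes from a `subPathExpr` whose variable expansion yields an absolute path, which the kubelet rejects at container start. A hedged sketch of such a pod — names and values are illustrative; the framework generates a UID-based pod name:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sleep", "30"]
    env:
    - name: ABS_PATH
      value: /tmp                # expansion result is an absolute path
    volumeMounts:
    - name: work
      mountPath: /data
      subPathExpr: $(ABS_PATH)   # subPath must be relative, so the pod fails to start
  volumes:
  - name: work
    emptyDir: {}
```

Because the expansion happens at runtime, the failure surfaces on the kubelet rather than at admission; the test then deletes the pod, which is the wait seen in the log.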
• [SLOW TEST:16.159 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":-1,"completed":8,"skipped":147,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:09:26.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run through the lifecycle of a ServiceAccount [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a ServiceAccount
STEP: watching for the ServiceAccount to be added
STEP: patching the ServiceAccount
STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
STEP: deleting the ServiceAccount
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:09:26.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6037" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":9,"skipped":151,"failed":0}
S
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:09:20.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
STEP: submitting the pod to kubernetes
May 25 10:09:20.213: INFO: The status of Pod pod-update-activedeadlineseconds-140baa87-93aa-49bb-ba4f-9a0965a902a3 is Pending, waiting for it to be Running (with Ready = true)
May 25 10:09:22.216: INFO: The status of Pod pod-update-activedeadlineseconds-140baa87-93aa-49bb-ba4f-9a0965a902a3 is Running (Ready = true)
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 25 10:09:22.733: INFO: Successfully updated pod "pod-update-activedeadlineseconds-140baa87-93aa-49bb-ba4f-9a0965a902a3"
May 25 10:09:22.733: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-140baa87-93aa-49bb-ba4f-9a0965a902a3" in namespace "pods-3702" to be "terminated due to deadline exceeded"
May 25 10:09:22.736: INFO: Pod "pod-update-activedeadlineseconds-140baa87-93aa-49bb-ba4f-9a0965a902a3": Phase="Running", Reason="", readiness=true. Elapsed: 3.202507ms
May 25 10:09:24.741: INFO: Pod "pod-update-activedeadlineseconds-140baa87-93aa-49bb-ba4f-9a0965a902a3": Phase="Running", Reason="", readiness=true. Elapsed: 2.007607965s
May 25 10:09:26.745: INFO: Pod "pod-update-activedeadlineseconds-140baa87-93aa-49bb-ba4f-9a0965a902a3": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.012240034s
May 25 10:09:26.745: INFO: Pod "pod-update-activedeadlineseconds-140baa87-93aa-49bb-ba4f-9a0965a902a3" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:09:26.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3702" for this suite.
• [SLOW TEST:6.632 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":51,"failed":0}
SS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:09:26.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 25 10:09:27.513: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 25 10:09:30.531: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 25 10:09:30.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-141-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:09:33.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6131" for this suite.
STEP: Destroying namespace "webhook-6131-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.958 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":4,"skipped":53,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:09:26.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[It] should observe PodDisruptionBudget status updated [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Waiting for the pdb to be processed
STEP: Waiting for all pods to be running
May 25 10:09:28.664: INFO: running pods: 0 < 3
May 25 10:09:30.673: INFO: running pods: 0 < 3
May 25 10:09:32.673: INFO: running pods: 1 < 3
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:09:34.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-9754" for this suite.
• [SLOW TEST:8.189 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should observe PodDisruptionBudget status updated [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":10,"skipped":152,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:09:34.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run through a ConfigMap lifecycle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a ConfigMap
STEP: fetching the ConfigMap
STEP: patching the ConfigMap
STEP: listing all ConfigMaps in all namespaces with a label selector
STEP: deleting the ConfigMap by collection with a label selector
STEP: listing all ConfigMaps in test namespace
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:09:36.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7730" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":11,"skipped":181,"failed":0}
SS
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":3,"skipped":71,"failed":0}
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:08:24.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0525 10:08:34.971069 25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 25 10:09:37.081: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:09:37.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9419" for this suite.
• [SLOW TEST:72.384 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":4,"skipped":71,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:08:30.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with configMap that has name projected-configmap-test-upd-83cc8777-4bd1-404c-b3dc-d7ffe5b64f14
STEP: Creating the pod
May 25 10:08:31.038: INFO: The status of Pod pod-projected-configmaps-324f92bb-76a5-49f2-85a2-0ac2d13e7952 is Pending, waiting for it to be Running (with Ready = true)
May 25 10:08:33.042: INFO: The status of Pod pod-projected-configmaps-324f92bb-76a5-49f2-85a2-0ac2d13e7952 is Pending, waiting for it to be Running (with Ready = true)
May 25 10:08:35.043: INFO: The status of Pod pod-projected-configmaps-324f92bb-76a5-49f2-85a2-0ac2d13e7952 is Running (Ready = true)
STEP: Updating configmap projected-configmap-test-upd-83cc8777-4bd1-404c-b3dc-d7ffe5b64f14
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:09:40.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3146" for this suite.
• [SLOW TEST:69.125 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":80,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:08:41.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0525 10:08:42.586915 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 25 10:09:44.607: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:09:44.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4889" for this suite.
• [SLOW TEST:63.106 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":6,"skipped":117,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:09:21.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86
[It] deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 25 10:09:21.684: INFO: Pod name rollover-pod: Found 0 pods out of 1
May 25 10:09:26.688: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 25 10:09:26.688: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
May 25 10:09:28.693: INFO: Creating
deployment "test-rollover-deployment" May 25 10:09:28.701: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 25 10:09:30.709: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 25 10:09:30.715: INFO: Ensure that both replica sets have 1 created replica May 25 10:09:30.721: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 25 10:09:30.730: INFO: Updating deployment test-rollover-deployment May 25 10:09:30.730: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 25 10:09:32.737: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 25 10:09:32.744: INFO: Make sure deployment "test-rollover-deployment" is complete May 25 10:09:32.751: INFO: all replica sets need to contain the pod-template-hash label May 25 10:09:32.751: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534168, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534168, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534170, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534168, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 10:09:34.785: INFO: all replica sets need to contain the pod-template-hash label May 25 10:09:34.785: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534168, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534168, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534174, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534168, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 10:09:36.885: INFO: all replica sets need to contain the pod-template-hash label May 25 10:09:36.886: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534168, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534168, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534174, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534168, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 10:09:38.758: INFO: all replica 
sets need to contain the pod-template-hash label May 25 10:09:38.758: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534168, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534168, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534174, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534168, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 10:09:40.761: INFO: all replica sets need to contain the pod-template-hash label May 25 10:09:40.761: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534168, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534168, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534174, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534168, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 10:09:42.760: INFO: all replica sets need to contain the pod-template-hash label May 25 10:09:42.760: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534168, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534168, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534174, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534168, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 10:09:44.759: INFO: May 25 10:09:44.759: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 May 25 10:09:44.770: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-8448 9520baf8-5c80-4470-82d9-175e88147284 493667 2 2021-05-25 10:09:28 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-05-25 10:09:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-05-25 10:09:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0054ee188 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-05-25 10:09:28 +0000 UTC,LastTransitionTime:2021-05-25 10:09:28 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-98c5f4599" has successfully progressed.,LastUpdateTime:2021-05-25 10:09:44 +0000 UTC,LastTransitionTime:2021-05-25 10:09:28 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 25 10:09:44.774: INFO: New ReplicaSet "test-rollover-deployment-98c5f4599" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-98c5f4599 deployment-8448 c2028971-6336-473a-8c41-86e51749c009 493656 2 2021-05-25 10:09:30 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 9520baf8-5c80-4470-82d9-175e88147284 0xc0054ee6f0 0xc0054ee6f1}] [] [{kube-controller-manager Update apps/v1 2021-05-25 10:09:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9520baf8-5c80-4470-82d9-175e88147284\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 98c5f4599,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0054ee768 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil 
default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 25 10:09:44.774: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 25 10:09:44.774: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-8448 4666644e-d368-407d-a34d-9b0bb6959989 493666 2 2021-05-25 10:09:21 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 9520baf8-5c80-4470-82d9-175e88147284 0xc0054ee4e7 0xc0054ee4e8}] [] [{e2e.test Update apps/v1 2021-05-25 10:09:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-05-25 10:09:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9520baf8-5c80-4470-82d9-175e88147284\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} 
{[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0054ee588 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 25 10:09:44.774: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-8448 8a0a3991-8dde-47bd-a35f-82e86e923473 493337 2 2021-05-25 10:09:28 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 9520baf8-5c80-4470-82d9-175e88147284 0xc0054ee5f7 0xc0054ee5f8}] [] [{kube-controller-manager Update apps/v1 2021-05-25 10:09:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9520baf8-5c80-4470-82d9-175e88147284\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0054ee688 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 25 10:09:44.779: INFO: Pod "test-rollover-deployment-98c5f4599-2jdd7" is available: &Pod{ObjectMeta:{test-rollover-deployment-98c5f4599-2jdd7 test-rollover-deployment-98c5f4599- deployment-8448 fec8e766-d824-4e3e-a55a-d04b9bc69443 493442 0 2021-05-25 10:09:30 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.253" ], "mac": "d6:63:30:68:67:a1", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.253" ], "mac": "d6:63:30:68:67:a1", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet test-rollover-deployment-98c5f4599 c2028971-6336-473a-8c41-86e51749c009 0xc0054eec60 0xc0054eec61}] [] [{kube-controller-manager Update v1 2021-05-25 10:09:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2028971-6336-473a-8c41-86e51749c009\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-25 10:09:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-25 10:09:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.253\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wfzvd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Re
quests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wfzvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Pha
se:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:09:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:09:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:09:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:09:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.1.253,StartTime:2021-05-25 10:09:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-25 10:09:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:containerd://dca6f63b93d6d253150be5c67861dbb51180bf7a2897caf08c5da4c80b4e26f3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.253,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:09:44.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8448" for this suite. 
• [SLOW TEST:23.140 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":10,"skipped":191,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:09:37.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange May 25 10:09:38.091: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values May 25 10:09:38.096: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 25 10:09:38.096: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange May 25 10:09:38.102: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 25 10:09:38.102: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange May 25 10:09:38.111: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 
300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] May 25 10:09:38.111: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted May 25 10:09:45.139: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:09:45.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-828" for this suite. • [SLOW TEST:7.819 seconds] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance]","total":-1,"completed":5,"skipped":96,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:09:40.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:09:40.149: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 25 10:09:45.154: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 25 10:09:45.154: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 May 25 10:09:45.175: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-6089 9e6c716d-79f6-4127-9e17-69fdc95d6fb1 493690 1 2021-05-25 10:09:45 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2021-05-25 10:09:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003339988 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} 
May 25 10:09:45.177: INFO: New ReplicaSet "test-cleanup-deployment-5b4d99b59b" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5b4d99b59b deployment-6089 2da529e1-be6a-4063-b368-ea227b834da4 493695 1 2021-05-25 10:09:45 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 9e6c716d-79f6-4127-9e17-69fdc95d6fb1 0xc003339da7 0xc003339da8}] [] [{kube-controller-manager Update apps/v1 2021-05-25 10:09:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9e6c716d-79f6-4127-9e17-69fdc95d6fb1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5b4d99b59b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003339e38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 25 10:09:45.177: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 25 10:09:45.177: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-6089 7e4f3322-f637-426e-a879-73d40fec44a6 493693 1 2021-05-25 10:09:40 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 9e6c716d-79f6-4127-9e17-69fdc95d6fb1 0xc003339c97 0xc003339c98}] [] [{e2e.test Update apps/v1 2021-05-25 10:09:40 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-05-25 10:09:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"9e6c716d-79f6-4127-9e17-69fdc95d6fb1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003339d38 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 25 10:09:45.181: INFO: Pod "test-cleanup-controller-c7br5" is available: &Pod{ObjectMeta:{test-cleanup-controller-c7br5 test-cleanup-controller- deployment-6089 3438a9b0-4f9c-4545-a5c2-5aa7e078e823 493614 0 2021-05-25 10:09:40 +0000 UTC map[name:cleanup-pod pod:httpd] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.193" ], "mac": "de:b0:60:84:f4:1f", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.193" ], "mac": "de:b0:60:84:f4:1f", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet test-cleanup-controller 7e4f3322-f637-426e-a879-73d40fec44a6 0xc003776387 0xc003776388}] [] [{kube-controller-manager Update v1 
2021-05-25 10:09:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7e4f3322-f637-426e-a879-73d40fec44a6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-25 10:09:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-25 10:09:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.193\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lqpsk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&S
erviceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lqpsk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{
},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:09:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:09:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:09:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:09:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.2.193,StartTime:2021-05-25 10:09:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-25 10:09:41 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://11f79ad9da95418375ffdc0ab7dd0ec6095653062bfe2a164002d665dccf9d90,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.193,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:09:45.181: INFO: Pod "test-cleanup-deployment-5b4d99b59b-c2lg2" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-5b4d99b59b-c2lg2 test-cleanup-deployment-5b4d99b59b- deployment-6089 c2eb63cf-554e-4586-ab27-bb41c4367015 493699 0 2021-05-25 10:09:45 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-5b4d99b59b 2da529e1-be6a-4063-b368-ea227b834da4 0xc003776587 0xc003776588}] [] [{kube-controller-manager Update v1 2021-05-25 10:09:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2da529e1-be6a-4063-b368-ea227b834da4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-t5jnv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Co
mmand:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t5jnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]Top
ologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:09:45.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6089" for this suite. • [SLOW TEST:5.071 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":4,"skipped":83,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:09:36.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:09:37.085: INFO: Creating ReplicaSet my-hostname-basic-1848cee1-f515-42f3-9a82-cb0226cae28c May 25 10:09:37.285: INFO: Pod name 
my-hostname-basic-1848cee1-f515-42f3-9a82-cb0226cae28c: Found 0 pods out of 1 May 25 10:09:42.291: INFO: Pod name my-hostname-basic-1848cee1-f515-42f3-9a82-cb0226cae28c: Found 1 pods out of 1 May 25 10:09:42.291: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-1848cee1-f515-42f3-9a82-cb0226cae28c" is running May 25 10:09:42.295: INFO: Pod "my-hostname-basic-1848cee1-f515-42f3-9a82-cb0226cae28c-8jkms" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-25 10:09:37 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-25 10:09:39 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-25 10:09:39 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-25 10:09:37 +0000 UTC Reason: Message:}]) May 25 10:09:42.295: INFO: Trying to dial the pod May 25 10:09:47.308: INFO: Controller my-hostname-basic-1848cee1-f515-42f3-9a82-cb0226cae28c: Got expected result from replica 1 [my-hostname-basic-1848cee1-f515-42f3-9a82-cb0226cae28c-8jkms]: "my-hostname-basic-1848cee1-f515-42f3-9a82-cb0226cae28c-8jkms", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:09:47.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4191" for this suite. 
• [SLOW TEST:10.726 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":12,"skipped":183,"failed":0} [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:09:47.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Updating PodDisruptionBudget status STEP: Waiting for all pods to be running May 25 10:09:49.384: INFO: running pods: 0 < 1 STEP: locating a running pod STEP: Waiting for the pdb to be processed STEP: Patching PodDisruptionBudget status STEP: Waiting for the pdb to be processed [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:09:51.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-2528" for this suite. 
• ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:09:45.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 25 10:09:45.199: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:09:55.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6479" for this suite. 
• [SLOW TEST:9.906 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":104,"failed":0} SS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:09:23.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-1593 STEP: creating service affinity-clusterip-transition in namespace services-1593 STEP: creating replication controller affinity-clusterip-transition in namespace services-1593 I0525 10:09:23.922272 35 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-1593, replica count: 3 I0525 10:09:26.973520 35 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 10:09:29.974174 35 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 
inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 25 10:09:29.980: INFO: Creating new exec pod May 25 10:09:37.080: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-1593 exec execpod-affinitykrg77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' May 25 10:09:37.653: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" May 25 10:09:37.653: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 25 10:09:37.653: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-1593 exec execpod-affinitykrg77 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.96.27.95 80' May 25 10:09:38.170: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.96.27.95 80\nConnection to 10.96.27.95 80 port [tcp/http] succeeded!\n" May 25 10:09:38.170: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 25 10:09:38.182: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-1593 exec execpod-affinitykrg77 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.27.95:80/ ; done' May 25 10:09:38.515: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 
2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n" May 25 10:09:38.515: INFO: stdout: "\naffinity-clusterip-transition-flh68\naffinity-clusterip-transition-f2hzv\naffinity-clusterip-transition-9s9h2\naffinity-clusterip-transition-flh68\naffinity-clusterip-transition-flh68\naffinity-clusterip-transition-flh68\naffinity-clusterip-transition-flh68\naffinity-clusterip-transition-9s9h2\naffinity-clusterip-transition-f2hzv\naffinity-clusterip-transition-f2hzv\naffinity-clusterip-transition-9s9h2\naffinity-clusterip-transition-flh68\naffinity-clusterip-transition-f2hzv\naffinity-clusterip-transition-flh68\naffinity-clusterip-transition-9s9h2\naffinity-clusterip-transition-9s9h2" May 25 10:09:38.515: INFO: Received response from host: affinity-clusterip-transition-flh68 May 25 10:09:38.515: INFO: Received response from host: affinity-clusterip-transition-f2hzv May 25 10:09:38.515: INFO: Received response from host: affinity-clusterip-transition-9s9h2 May 25 10:09:38.515: INFO: Received response from host: affinity-clusterip-transition-flh68 May 25 10:09:38.515: INFO: Received response from host: affinity-clusterip-transition-flh68 May 25 10:09:38.515: INFO: Received response from host: affinity-clusterip-transition-flh68 May 25 10:09:38.515: INFO: Received response from host: affinity-clusterip-transition-flh68 May 25 10:09:38.515: INFO: Received response from host: affinity-clusterip-transition-9s9h2 May 25 10:09:38.515: 
INFO: Received response from host: affinity-clusterip-transition-f2hzv May 25 10:09:38.515: INFO: Received response from host: affinity-clusterip-transition-f2hzv May 25 10:09:38.515: INFO: Received response from host: affinity-clusterip-transition-9s9h2 May 25 10:09:38.515: INFO: Received response from host: affinity-clusterip-transition-flh68 May 25 10:09:38.515: INFO: Received response from host: affinity-clusterip-transition-f2hzv May 25 10:09:38.515: INFO: Received response from host: affinity-clusterip-transition-flh68 May 25 10:09:38.515: INFO: Received response from host: affinity-clusterip-transition-9s9h2 May 25 10:09:38.515: INFO: Received response from host: affinity-clusterip-transition-9s9h2 May 25 10:09:38.587: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-1593 exec execpod-affinitykrg77 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.27.95:80/ ; done' May 25 10:09:38.915: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.96.27.95:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.27.95:80/\n" May 25 10:09:38.915: INFO: stdout: "\naffinity-clusterip-transition-flh68\naffinity-clusterip-transition-flh68\naffinity-clusterip-transition-flh68\naffinity-clusterip-transition-flh68\naffinity-clusterip-transition-flh68\naffinity-clusterip-transition-flh68\naffinity-clusterip-transition-flh68\naffinity-clusterip-transition-flh68\naffinity-clusterip-transition-flh68\naffinity-clusterip-transition-flh68\naffinity-clusterip-transition-flh68\naffinity-clusterip-transition-flh68\naffinity-clusterip-transition-flh68\naffinity-clusterip-transition-flh68\naffinity-clusterip-transition-flh68\naffinity-clusterip-transition-flh68" May 25 10:09:38.915: INFO: Received response from host: affinity-clusterip-transition-flh68 May 25 10:09:38.915: INFO: Received response from host: affinity-clusterip-transition-flh68 May 25 10:09:38.915: INFO: Received response from host: affinity-clusterip-transition-flh68 May 25 10:09:38.915: INFO: Received response from host: affinity-clusterip-transition-flh68 May 25 10:09:38.915: INFO: Received response from host: affinity-clusterip-transition-flh68 May 25 10:09:38.915: INFO: Received response from host: affinity-clusterip-transition-flh68 May 25 10:09:38.915: INFO: Received response from host: affinity-clusterip-transition-flh68 May 25 10:09:38.915: INFO: Received response from host: affinity-clusterip-transition-flh68 May 25 10:09:38.916: INFO: Received response from host: affinity-clusterip-transition-flh68 May 25 10:09:38.916: INFO: Received response from host: affinity-clusterip-transition-flh68 May 25 10:09:38.916: INFO: Received response from host: affinity-clusterip-transition-flh68 May 25 10:09:38.916: INFO: Received response from host: affinity-clusterip-transition-flh68 May 25 10:09:38.916: INFO: Received response from host: affinity-clusterip-transition-flh68 May 25 10:09:38.916: INFO: Received response from host: 
affinity-clusterip-transition-flh68 May 25 10:09:38.916: INFO: Received response from host: affinity-clusterip-transition-flh68 May 25 10:09:38.916: INFO: Received response from host: affinity-clusterip-transition-flh68 May 25 10:09:38.916: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-1593, will wait for the garbage collector to delete the pods May 25 10:09:38.981: INFO: Deleting ReplicationController affinity-clusterip-transition took: 5.337083ms May 25 10:09:39.082: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.660965ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:09:55.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1593" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:31.423 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":10,"skipped":193,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:09:44.674: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:09:44.718: INFO: The status of Pod test-webserver-42935e2b-e887-4348-bd52-63525ddc8a6c is Pending, waiting for it to be Running (with Ready = true) May 25 10:09:46.722: INFO: The status of Pod test-webserver-42935e2b-e887-4348-bd52-63525ddc8a6c is Running (Ready = false) May 25 10:09:48.723: INFO: The status of Pod test-webserver-42935e2b-e887-4348-bd52-63525ddc8a6c is Running (Ready = false) May 25 10:09:50.724: INFO: The status of Pod test-webserver-42935e2b-e887-4348-bd52-63525ddc8a6c is Running (Ready = false) May 25 10:09:52.722: INFO: The status of Pod test-webserver-42935e2b-e887-4348-bd52-63525ddc8a6c is Running (Ready = false) May 25 10:09:54.722: INFO: The status of Pod test-webserver-42935e2b-e887-4348-bd52-63525ddc8a6c is Running (Ready = false) May 25 10:09:56.723: INFO: The status of Pod test-webserver-42935e2b-e887-4348-bd52-63525ddc8a6c is Running (Ready = false) May 25 10:09:58.723: INFO: The status of Pod test-webserver-42935e2b-e887-4348-bd52-63525ddc8a6c is Running (Ready = false) May 25 10:10:00.979: INFO: The status of Pod test-webserver-42935e2b-e887-4348-bd52-63525ddc8a6c is Running (Ready = false) May 25 10:10:02.779: INFO: The status of Pod test-webserver-42935e2b-e887-4348-bd52-63525ddc8a6c is Running (Ready = false) May 25 10:10:04.722: INFO: The status of Pod test-webserver-42935e2b-e887-4348-bd52-63525ddc8a6c is Running (Ready = false) May 25 10:10:06.723: INFO: The status of Pod 
test-webserver-42935e2b-e887-4348-bd52-63525ddc8a6c is Running (Ready = true) May 25 10:10:06.727: INFO: Container started at 2021-05-25 10:09:45 +0000 UTC, pod became ready at 2021-05-25 10:10:04 +0000 UTC [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:06.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2328" for this suite. • [SLOW TEST:22.063 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":146,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:09:45.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-p2zz STEP: Creating a pod to test 
atomic-volume-subpath May 25 10:09:45.339: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-p2zz" in namespace "subpath-3748" to be "Succeeded or Failed" May 25 10:09:45.342: INFO: Pod "pod-subpath-test-configmap-p2zz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.871972ms May 25 10:09:47.346: INFO: Pod "pod-subpath-test-configmap-p2zz": Phase="Running", Reason="", readiness=true. Elapsed: 2.007164131s May 25 10:09:49.356: INFO: Pod "pod-subpath-test-configmap-p2zz": Phase="Running", Reason="", readiness=true. Elapsed: 4.016916974s May 25 10:09:51.361: INFO: Pod "pod-subpath-test-configmap-p2zz": Phase="Running", Reason="", readiness=true. Elapsed: 6.022042034s May 25 10:09:53.366: INFO: Pod "pod-subpath-test-configmap-p2zz": Phase="Running", Reason="", readiness=true. Elapsed: 8.026698035s May 25 10:09:55.370: INFO: Pod "pod-subpath-test-configmap-p2zz": Phase="Running", Reason="", readiness=true. Elapsed: 10.031120467s May 25 10:09:57.376: INFO: Pod "pod-subpath-test-configmap-p2zz": Phase="Running", Reason="", readiness=true. Elapsed: 12.036301125s May 25 10:09:59.380: INFO: Pod "pod-subpath-test-configmap-p2zz": Phase="Running", Reason="", readiness=true. Elapsed: 14.040919045s May 25 10:10:01.582: INFO: Pod "pod-subpath-test-configmap-p2zz": Phase="Running", Reason="", readiness=true. Elapsed: 16.242470022s May 25 10:10:03.586: INFO: Pod "pod-subpath-test-configmap-p2zz": Phase="Running", Reason="", readiness=true. Elapsed: 18.247137245s May 25 10:10:05.591: INFO: Pod "pod-subpath-test-configmap-p2zz": Phase="Running", Reason="", readiness=true. Elapsed: 20.252140585s May 25 10:10:07.596: INFO: Pod "pod-subpath-test-configmap-p2zz": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.256430682s STEP: Saw pod success May 25 10:10:07.596: INFO: Pod "pod-subpath-test-configmap-p2zz" satisfied condition "Succeeded or Failed" May 25 10:10:07.598: INFO: Trying to get logs from node v1.21-worker2 pod pod-subpath-test-configmap-p2zz container test-container-subpath-configmap-p2zz: STEP: delete the pod May 25 10:10:07.614: INFO: Waiting for pod pod-subpath-test-configmap-p2zz to disappear May 25 10:10:07.617: INFO: Pod pod-subpath-test-configmap-p2zz no longer exists STEP: Deleting pod pod-subpath-test-configmap-p2zz May 25 10:10:07.617: INFO: Deleting pod "pod-subpath-test-configmap-p2zz" in namespace "subpath-3748" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:07.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3748" for this suite. • [SLOW TEST:22.330 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":146,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:09:17.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: 
Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-f8244888-cf08-436a-bda6-f474789580e5 in namespace container-probe-1137 May 25 10:09:19.223: INFO: Started pod busybox-f8244888-cf08-436a-bda6-f474789580e5 in namespace container-probe-1137 STEP: checking the pod's current state and verifying that restartCount is present May 25 10:09:19.226: INFO: Initial restart count of pod busybox-f8244888-cf08-436a-bda6-f474789580e5 is 0 May 25 10:10:07.795: INFO: Restart count of pod container-probe-1137/busybox-f8244888-cf08-436a-bda6-f474789580e5 is now 1 (48.568694554s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:07.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1137" for this suite. 
• [SLOW TEST:50.627 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":25,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:09:55.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service nodeport-test with type=NodePort in namespace services-4243 STEP: creating replication controller nodeport-test in namespace services-4243 I0525 10:09:55.411986 35 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-4243, replica count: 2 I0525 10:09:58.463861 35 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 10:10:01.464308 35 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 
pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 10:10:04.465470 35 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 25 10:10:04.465: INFO: Creating new exec pod May 25 10:10:07.485: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-4243 exec execpod644r9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' May 25 10:10:07.693: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" May 25 10:10:07.693: INFO: stdout: "nodeport-test-jmmqg" May 25 10:10:07.693: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-4243 exec execpod644r9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.96.91.158 80' May 25 10:10:07.940: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.96.91.158 80\nConnection to 10.96.91.158 80 port [tcp/http] succeeded!\n" May 25 10:10:07.940: INFO: stdout: "nodeport-test-9qkpm" May 25 10:10:07.940: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-4243 exec execpod644r9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.4 30415' May 25 10:10:08.194: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.4 30415\nConnection to 172.18.0.4 30415 port [tcp/*] succeeded!\n" May 25 10:10:08.194: INFO: stdout: "nodeport-test-jmmqg" May 25 10:10:08.195: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-4243 exec execpod644r9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.2 30415' May 25 10:10:08.435: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.2 30415\nConnection to 172.18.0.2 30415 port [tcp/*] succeeded!\n" 
May 25 10:10:08.435: INFO: stdout: "nodeport-test-jmmqg" [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:08.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4243" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:13.080 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":11,"skipped":223,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:06.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override all May 25 10:10:06.801: INFO: Waiting up to 5m0s for pod "client-containers-24e0e89e-bc8e-428f-807e-9f5bb1172356" in namespace "containers-7952" to be "Succeeded or Failed" May 25 10:10:06.804: INFO: Pod "client-containers-24e0e89e-bc8e-428f-807e-9f5bb1172356": Phase="Pending", 
Reason="", readiness=false. Elapsed: 2.780031ms May 25 10:10:08.811: INFO: Pod "client-containers-24e0e89e-bc8e-428f-807e-9f5bb1172356": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009746456s STEP: Saw pod success May 25 10:10:08.811: INFO: Pod "client-containers-24e0e89e-bc8e-428f-807e-9f5bb1172356" satisfied condition "Succeeded or Failed" May 25 10:10:08.814: INFO: Trying to get logs from node v1.21-worker2 pod client-containers-24e0e89e-bc8e-428f-807e-9f5bb1172356 container agnhost-container: STEP: delete the pod May 25 10:10:08.826: INFO: Waiting for pod client-containers-24e0e89e-bc8e-428f-807e-9f5bb1172356 to disappear May 25 10:10:08.829: INFO: Pod client-containers-24e0e89e-bc8e-428f-807e-9f5bb1172356 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:08.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7952" for this suite. 
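The "override all" test above verifies that a pod spec's `command` and `args` replace the image's baked-in ENTRYPOINT and CMD. A hedged sketch of that pattern — image tag and argument values are illustrative assumptions, not taken from the log:

```yaml
# Hypothetical pod overriding both the image's default entrypoint and
# its default arguments, the behavior the test above checks.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example
spec:
  restartPolicy: Never
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32   # assumed tag
    command: ["/agnhost"]                 # replaces the image ENTRYPOINT
    args: ["entrypoint-tester", "override", "arguments"]   # replaces CMD
```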
• ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":153,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:07.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs May 25 10:10:07.984: INFO: Waiting up to 5m0s for pod "pod-c0a60863-8467-46a7-a6c4-1bdac1f40574" in namespace "emptydir-6402" to be "Succeeded or Failed" May 25 10:10:07.987: INFO: Pod "pod-c0a60863-8467-46a7-a6c4-1bdac1f40574": Phase="Pending", Reason="", readiness=false. Elapsed: 2.961069ms May 25 10:10:09.992: INFO: Pod "pod-c0a60863-8467-46a7-a6c4-1bdac1f40574": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007617077s STEP: Saw pod success May 25 10:10:09.992: INFO: Pod "pod-c0a60863-8467-46a7-a6c4-1bdac1f40574" satisfied condition "Succeeded or Failed" May 25 10:10:09.995: INFO: Trying to get logs from node v1.21-worker2 pod pod-c0a60863-8467-46a7-a6c4-1bdac1f40574 container test-container: STEP: delete the pod May 25 10:10:10.009: INFO: Waiting for pod pod-c0a60863-8467-46a7-a6c4-1bdac1f40574 to disappear May 25 10:10:10.013: INFO: Pod pod-c0a60863-8467-46a7-a6c4-1bdac1f40574 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:10.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6402" for this suite. • ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:08.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ReplicationController STEP: waiting for RC to be added STEP: waiting for available Replicas STEP: patching ReplicationController STEP: waiting for RC to be modified STEP: patching ReplicationController status STEP: waiting for RC to be modified STEP: waiting for available Replicas STEP: fetching ReplicationController status STEP: patching ReplicationController scale STEP: waiting for RC to be modified STEP: 
waiting for ReplicationController's scale to be the max amount STEP: fetching ReplicationController; ensuring that it's patched STEP: updating ReplicationController status STEP: waiting for RC to be modified STEP: listing all ReplicationControllers STEP: checking that ReplicationController has expected values STEP: deleting ReplicationControllers by collection STEP: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:11.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4270" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":9,"skipped":195,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:11.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:10:11.878: INFO: The status of Pod busybox-host-aliases3d5c2c57-aa73-4121-9023-605af65c124d is Pending, waiting for it to be Running (with Ready = true) May 25 10:10:13.881: INFO: The status of Pod busybox-host-aliases3d5c2c57-aa73-4121-9023-605af65c124d is Running (Ready = true) 
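The hostAliases test above relies on the kubelet injecting extra entries into the container's `/etc/hosts`. A minimal sketch, with illustrative IP and hostnames:

```yaml
# Sketch of a pod with hostAliases; the kubelet writes these entries
# into /etc/hosts inside the container, which is what the test verifies.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases-example
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "123.45.67.89"                     # illustrative values
    hostnames: ["foo.local", "bar.local"]
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/hosts"]
```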
[AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:13.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9694" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":204,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:09:55.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
May 25 10:09:55.123: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 25 10:09:57.128: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 25 10:09:59.128: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 25 10:10:01.279: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook May 25 10:10:01.781: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) May 25 10:10:03.786: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) May 25 10:10:05.787: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook May 25 10:10:05.803: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 25 10:10:05.807: INFO: Pod pod-with-poststart-exec-hook still exists May 25 10:10:07.808: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 25 10:10:07.811: INFO: Pod pod-with-poststart-exec-hook still exists May 25 10:10:09.808: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 25 10:10:09.812: INFO: Pod pod-with-poststart-exec-hook still exists May 25 10:10:11.808: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 25 10:10:11.811: INFO: Pod pod-with-poststart-exec-hook still exists May 25 10:10:13.807: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 25 10:10:13.811: INFO: Pod pod-with-poststart-exec-hook still exists May 25 10:10:15.808: INFO: Waiting for pod 
pod-with-poststart-exec-hook to disappear May 25 10:10:15.811: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:15.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2969" for this suite. • [SLOW TEST:20.737 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":106,"failed":0} SSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":98,"failed":0} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:10.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve a basic endpoint from pods 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service endpoint-test2 in namespace services-4522 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4522 to expose endpoints map[] May 25 10:10:10.067: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found May 25 10:10:11.076: INFO: successfully validated that service endpoint-test2 in namespace services-4522 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-4522 May 25 10:10:11.086: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) May 25 10:10:13.090: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4522 to expose endpoints map[pod1:[80]] May 25 10:10:13.103: INFO: successfully validated that service endpoint-test2 in namespace services-4522 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-4522 May 25 10:10:13.111: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 25 10:10:15.114: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4522 to expose endpoints map[pod1:[80] pod2:[80]] May 25 10:10:15.129: INFO: successfully validated that service endpoint-test2 in namespace services-4522 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-4522 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4522 to expose endpoints map[pod2:[80]] May 25 10:10:15.148: INFO: successfully validated that service endpoint-test2 in namespace services-4522 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-4522 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4522 to expose endpoints map[] May 25 
10:10:16.166: INFO: successfully validated that service endpoint-test2 in namespace services-4522 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:16.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4522" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:6.161 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":4,"skipped":98,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:15.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars May 25 10:10:15.879: INFO: Waiting up to 5m0s for pod "downward-api-ea2ce17a-12de-4f99-9773-5a4a565944e5" in namespace "downward-api-302" to be "Succeeded or Failed" May 25 10:10:15.881: INFO: Pod 
"downward-api-ea2ce17a-12de-4f99-9773-5a4a565944e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.741104ms May 25 10:10:17.886: INFO: Pod "downward-api-ea2ce17a-12de-4f99-9773-5a4a565944e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007428822s STEP: Saw pod success May 25 10:10:17.886: INFO: Pod "downward-api-ea2ce17a-12de-4f99-9773-5a4a565944e5" satisfied condition "Succeeded or Failed" May 25 10:10:17.889: INFO: Trying to get logs from node v1.21-worker pod downward-api-ea2ce17a-12de-4f99-9773-5a4a565944e5 container dapi-container: STEP: delete the pod May 25 10:10:17.906: INFO: Waiting for pod downward-api-ea2ce17a-12de-4f99-9773-5a4a565944e5 to disappear May 25 10:10:17.909: INFO: Pod downward-api-ea2ce17a-12de-4f99-9773-5a4a565944e5 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:17.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-302" for this suite. 
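The Downward API test above exposes the node's IP to the container through a `fieldRef` environment variable. A sketch of that pattern, assuming a busybox image for illustration:

```yaml
# Sketch of the downward API pattern exercised above: the scheduler
# places the pod, then status.hostIP is injected as an env var.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # the host IP the test asserts on
```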
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":115,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:13.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:17.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5707" for this suite. 
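The Kubelet test above schedules a container whose command always fails and checks that the container status carries a terminated state with a reason. A hypothetical pod reproducing that setup:

```yaml
# Pod whose container exits non-zero every time; the kubelet should
# record a terminated state with a reason (e.g. Error) in its status.
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-example
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox:1.29
    command: ["/bin/false"]   # always fails
```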
• ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":224,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:17.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:18.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3456" for this suite. 
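The discovery-document test above walks `/apis`, `/apis/apiextensions.k8s.io`, and `/apis/apiextensions.k8s.io/v1` looking for the CRD resource. For context, a minimal CustomResourceDefinition whose group and version would likewise appear in those discovery documents once created — group and names here are illustrative:

```yaml
# Minimal CRD sketch; after creation its group/version is published in
# the /apis discovery documents that the test fetches.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
```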
• ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:16.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 25 10:10:16.373: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d9d7d9d2-e371-44af-af13-24cfedc12d3a" in namespace "projected-9937" to be "Succeeded or Failed" May 25 10:10:16.377: INFO: Pod "downwardapi-volume-d9d7d9d2-e371-44af-af13-24cfedc12d3a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.451996ms May 25 10:10:18.381: INFO: Pod "downwardapi-volume-d9d7d9d2-e371-44af-af13-24cfedc12d3a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007643075s STEP: Saw pod success May 25 10:10:18.381: INFO: Pod "downwardapi-volume-d9d7d9d2-e371-44af-af13-24cfedc12d3a" satisfied condition "Succeeded or Failed" May 25 10:10:18.384: INFO: Trying to get logs from node v1.21-worker pod downwardapi-volume-d9d7d9d2-e371-44af-af13-24cfedc12d3a container client-container: STEP: delete the pod May 25 10:10:18.398: INFO: Waiting for pod downwardapi-volume-d9d7d9d2-e371-44af-af13-24cfedc12d3a to disappear May 25 10:10:18.401: INFO: Pod downwardapi-volume-d9d7d9d2-e371-44af-af13-24cfedc12d3a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:18.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9937" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":175,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":12,"skipped":226,"failed":0} [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:18.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser May 25 10:10:18.071: INFO: Waiting up to 5m0s for pod "security-context-0940557f-a521-45f9-a2ba-c0ded1961d3d" in namespace "security-context-7735" to be "Succeeded or Failed" May 25 10:10:18.074: INFO: Pod "security-context-0940557f-a521-45f9-a2ba-c0ded1961d3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.441285ms May 25 10:10:20.078: INFO: Pod "security-context-0940557f-a521-45f9-a2ba-c0ded1961d3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00636412s STEP: Saw pod success May 25 10:10:20.078: INFO: Pod "security-context-0940557f-a521-45f9-a2ba-c0ded1961d3d" satisfied condition "Succeeded or Failed" May 25 10:10:20.081: INFO: Trying to get logs from node v1.21-worker pod security-context-0940557f-a521-45f9-a2ba-c0ded1961d3d container test-container: STEP: delete the pod May 25 10:10:20.094: INFO: Waiting for pod security-context-0940557f-a521-45f9-a2ba-c0ded1961d3d to disappear May 25 10:10:20.098: INFO: Pod security-context-0940557f-a521-45f9-a2ba-c0ded1961d3d no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:20.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-7735" for this suite. 
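The Security Context test above sets per-container `runAsUser` and `runAsGroup` and confirms the process runs with those IDs. A sketch with illustrative UID/GID values:

```yaml
# Sketch of the per-container securityContext fields the test verifies;
# the UID/GID values are assumptions for illustration.
apiVersion: v1
kind: Pod
metadata:
  name: security-context-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "id -u && id -g"]
    securityContext:
      runAsUser: 1001
      runAsGroup: 2002
```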
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":13,"skipped":226,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:20.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events May 25 10:10:20.147: INFO: created test-event-1 May 25 10:10:20.151: INFO: created test-event-2 May 25 10:10:20.154: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events May 25 10:10:20.157: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity May 25 10:10:20.169: INFO: requesting list of events to confirm quantity [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:20.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1261" for this suite. 
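The Events test above creates several labelled Events and removes them with a single DeleteCollection call scoped by label selector. A hypothetical labelled Event of the kind involved — label key and involved object are assumptions:

```yaml
# Hypothetical labelled Event; the test creates a set like this and then
# deletes the whole set as a collection via a label selector.
apiVersion: v1
kind: Event
metadata:
  name: test-event-1
  labels:
    testevent-set: "true"   # assumed label used to select the collection
involvedObject:
  kind: Pod
  name: example-pod
reason: Created
message: example event for collection delete
type: Normal
```

Deleting the set then maps to something like `kubectl delete events -l testevent-set=true`, after which a list with the same selector should come back empty, as the final "requesting list of events to confirm quantity" step checks.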
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":14,"skipped":233,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:07.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:10:07.695: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 25 10:10:12.133: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9418 --namespace=crd-publish-openapi-9418 create -f -' May 25 10:10:12.623: INFO: stderr: "" May 25 10:10:12.623: INFO: stdout: "e2e-test-crd-publish-openapi-7674-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 25 10:10:12.623: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9418 --namespace=crd-publish-openapi-9418 delete e2e-test-crd-publish-openapi-7674-crds test-foo' May 25 10:10:12.756: INFO: stderr: "" May 25 10:10:12.756: INFO: stdout: "e2e-test-crd-publish-openapi-7674-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 25 10:10:12.757: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9418 
--namespace=crd-publish-openapi-9418 apply -f -' May 25 10:10:13.064: INFO: stderr: "" May 25 10:10:13.064: INFO: stdout: "e2e-test-crd-publish-openapi-7674-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 25 10:10:13.065: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9418 --namespace=crd-publish-openapi-9418 delete e2e-test-crd-publish-openapi-7674-crds test-foo' May 25 10:10:13.189: INFO: stderr: "" May 25 10:10:13.189: INFO: stdout: "e2e-test-crd-publish-openapi-7674-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 25 10:10:13.190: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9418 --namespace=crd-publish-openapi-9418 create -f -' May 25 10:10:13.472: INFO: rc: 1 May 25 10:10:13.472: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9418 --namespace=crd-publish-openapi-9418 apply -f -' May 25 10:10:13.758: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 25 10:10:13.758: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9418 --namespace=crd-publish-openapi-9418 create -f -' May 25 10:10:14.031: INFO: rc: 1 May 25 10:10:14.031: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9418 --namespace=crd-publish-openapi-9418 apply -f -' May 25 10:10:14.330: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 25 10:10:14.330: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9418 explain e2e-test-crd-publish-openapi-7674-crds' May 25 10:10:14.632: INFO: stderr: "" May 25 10:10:14.632: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7674-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 25 10:10:14.633: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9418 explain e2e-test-crd-publish-openapi-7674-crds.metadata' May 25 10:10:14.931: INFO: stderr: "" May 25 10:10:14.931: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7674-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. 
Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. 
Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. 
Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. 
Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 25 10:10:14.932: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9418 explain e2e-test-crd-publish-openapi-7674-crds.spec' May 25 10:10:15.242: INFO: stderr: "" May 25 10:10:15.242: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7674-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 25 10:10:15.243: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9418 explain e2e-test-crd-publish-openapi-7674-crds.spec.bars' May 25 10:10:15.543: INFO: stderr: "" May 25 10:10:15.543: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7674-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 25 10:10:15.543: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9418 explain e2e-test-crd-publish-openapi-7674-crds.spec.bars2' May 25 10:10:15.822: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI 
[Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:20.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9418" for this suite. • [SLOW TEST:12.722 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":6,"skipped":165,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:20.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod May 25 10:10:20.446: INFO: The status of Pod labelsupdate3de8f62c-0196-473c-bc28-6cfe0865a4d3 is Pending, waiting for it to be Running (with Ready = true) May 25 10:10:22.451: INFO: The status of Pod labelsupdate3de8f62c-0196-473c-bc28-6cfe0865a4d3 is Running (Ready = true) May 25 
10:10:22.972: INFO: Successfully updated pod "labelsupdate3de8f62c-0196-473c-bc28-6cfe0865a4d3" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:26.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3466" for this suite. • [SLOW TEST:6.601 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":176,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:08:55.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-7ddcdc86-e6f9-4c28-ad46-cc94e86f2c84 STEP: Creating configMap with name cm-test-opt-upd-1cbd1b39-b109-42bb-a440-8cddc4f4734a STEP: Creating the pod May 25 10:08:55.563: INFO: The status of Pod pod-projected-configmaps-28b054ae-c8cc-4ba8-8642-0ad4a15537db is Pending, waiting for it to be Running (with Ready = true) May 25 10:08:57.567: INFO: The status of Pod 
pod-projected-configmaps-28b054ae-c8cc-4ba8-8642-0ad4a15537db is Pending, waiting for it to be Running (with Ready = true) May 25 10:08:59.568: INFO: The status of Pod pod-projected-configmaps-28b054ae-c8cc-4ba8-8642-0ad4a15537db is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-7ddcdc86-e6f9-4c28-ad46-cc94e86f2c84 STEP: Updating configmap cm-test-opt-upd-1cbd1b39-b109-42bb-a440-8cddc4f4734a STEP: Creating configMap with name cm-test-opt-create-b3744640-442c-4ca4-8729-6a4d4d640004 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:29.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2512" for this suite. • [SLOW TEST:93.818 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":96,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:08.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: 
Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-tgmv STEP: Creating a pod to test atomic-volume-subpath May 25 10:10:08.509: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-tgmv" in namespace "subpath-5783" to be "Succeeded or Failed" May 25 10:10:08.512: INFO: Pod "pod-subpath-test-configmap-tgmv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.658488ms May 25 10:10:10.518: INFO: Pod "pod-subpath-test-configmap-tgmv": Phase="Running", Reason="", readiness=true. Elapsed: 2.008710774s May 25 10:10:12.522: INFO: Pod "pod-subpath-test-configmap-tgmv": Phase="Running", Reason="", readiness=true. Elapsed: 4.013316723s May 25 10:10:14.527: INFO: Pod "pod-subpath-test-configmap-tgmv": Phase="Running", Reason="", readiness=true. Elapsed: 6.017930857s May 25 10:10:16.532: INFO: Pod "pod-subpath-test-configmap-tgmv": Phase="Running", Reason="", readiness=true. Elapsed: 8.022724282s May 25 10:10:18.537: INFO: Pod "pod-subpath-test-configmap-tgmv": Phase="Running", Reason="", readiness=true. Elapsed: 10.027405605s May 25 10:10:20.541: INFO: Pod "pod-subpath-test-configmap-tgmv": Phase="Running", Reason="", readiness=true. Elapsed: 12.032015333s May 25 10:10:22.546: INFO: Pod "pod-subpath-test-configmap-tgmv": Phase="Running", Reason="", readiness=true. Elapsed: 14.036443402s May 25 10:10:24.551: INFO: Pod "pod-subpath-test-configmap-tgmv": Phase="Running", Reason="", readiness=true. Elapsed: 16.041509121s May 25 10:10:26.556: INFO: Pod "pod-subpath-test-configmap-tgmv": Phase="Running", Reason="", readiness=true. Elapsed: 18.046848943s May 25 10:10:28.561: INFO: Pod "pod-subpath-test-configmap-tgmv": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.051991583s May 25 10:10:30.566: INFO: Pod "pod-subpath-test-configmap-tgmv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.056825137s STEP: Saw pod success May 25 10:10:30.566: INFO: Pod "pod-subpath-test-configmap-tgmv" satisfied condition "Succeeded or Failed" May 25 10:10:30.569: INFO: Trying to get logs from node v1.21-worker pod pod-subpath-test-configmap-tgmv container test-container-subpath-configmap-tgmv: STEP: delete the pod May 25 10:10:30.585: INFO: Waiting for pod pod-subpath-test-configmap-tgmv to disappear May 25 10:10:30.588: INFO: Pod pod-subpath-test-configmap-tgmv no longer exists STEP: Deleting pod pod-subpath-test-configmap-tgmv May 25 10:10:30.588: INFO: Deleting pod "pod-subpath-test-configmap-tgmv" in namespace "subpath-5783" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:30.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5783" for this suite. 
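Each completed spec in a run like this emits a machine-readable JSON summary record (the `{"msg":"PASSED …","total":-1,…}` lines interleaved above). When the run is captured to a file, those records can be tallied with nothing but standard tools; a minimal sketch, where the heredoc is a two-record excerpt from this run standing in for a full capture (point `LOG` at the real file instead):

```shell
#!/bin/sh
# Tally the per-spec JSON summary records that Ginkgo interleaves with
# the streaming output. The heredoc below is a two-record excerpt from
# this run; substitute a full log capture to tally the whole suite.
LOG=$(mktemp)
cat >"$LOG" <<'EOF'
{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":14,"skipped":233,"failed":0}
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":6,"skipped":165,"failed":0}
EOF

# grep -c prints the count of matching lines; it exits non-zero when the
# count is 0, so "|| true" keeps the pipeline from aborting on no FAILED.
passed=$(grep -c '"msg":"PASSED' "$LOG")
failed=$(grep -c '"msg":"FAILED' "$LOG" || true)
echo "passed=$passed failed=$failed"   # prints: passed=2 failed=0
rm -f "$LOG"
```

On a full capture the same two counters give a quick pass/fail audit of all 5771 specs without re-parsing the surrounding STEP/INFO noise.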
• [SLOW TEST:22.142 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":12,"skipped":229,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:18.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:31.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7975" for this suite. • [SLOW TEST:13.103 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":-1,"completed":6,"skipped":225,"failed":0} SSSSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:17.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-w5n9m in namespace proxy-7795 I0525 10:10:18.009424 25 runners.go:190] Created replication controller with name: proxy-service-w5n9m, namespace: proxy-7795, replica count: 1 I0525 10:10:19.060675 25 runners.go:190] proxy-service-w5n9m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 10:10:20.061659 25 runners.go:190] proxy-service-w5n9m Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0525 10:10:21.061859 25 runners.go:190] proxy-service-w5n9m Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0525 10:10:22.062625 25 runners.go:190] proxy-service-w5n9m Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 25 10:10:22.066: INFO: setup took 4.069479076s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 25 10:10:22.072: INFO: (0) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname2/proxy/: bar (200; 5.865745ms) May 25 10:10:22.072: INFO: (0) 
/api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname1/proxy/: foo (200; 6.366838ms) May 25 10:10:22.072: INFO: (0) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname1/proxy/: foo (200; 6.48511ms) May 25 10:10:22.072: INFO: (0) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:1080/proxy/: ... (200; 6.408613ms) May 25 10:10:22.072: INFO: (0) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 6.463053ms) May 25 10:10:22.072: INFO: (0) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:1080/proxy/: test<... (200; 6.714469ms) May 25 10:10:22.072: INFO: (0) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 6.565372ms) May 25 10:10:22.072: INFO: (0) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname2/proxy/: bar (200; 6.462368ms) May 25 10:10:22.072: INFO: (0) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:160/proxy/: foo (200; 6.684843ms) May 25 10:10:22.073: INFO: (0) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:160/proxy/: foo (200; 7.403146ms) May 25 10:10:22.073: INFO: (0) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct/proxy/: test (200; 7.340893ms) May 25 10:10:22.082: INFO: (0) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:443/proxy/: test<... 
(200; 4.333273ms) May 25 10:10:22.090: INFO: (1) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname1/proxy/: foo (200; 7.92802ms) May 25 10:10:22.090: INFO: (1) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 7.942263ms) May 25 10:10:22.091: INFO: (1) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname2/proxy/: bar (200; 8.112021ms) May 25 10:10:22.091: INFO: (1) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:160/proxy/: foo (200; 8.312189ms) May 25 10:10:22.091: INFO: (1) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname1/proxy/: tls baz (200; 8.325866ms) May 25 10:10:22.091: INFO: (1) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname2/proxy/: bar (200; 8.510801ms) May 25 10:10:22.091: INFO: (1) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname2/proxy/: tls qux (200; 8.542895ms) May 25 10:10:22.091: INFO: (1) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname1/proxy/: foo (200; 8.486739ms) May 25 10:10:22.091: INFO: (1) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:460/proxy/: tls baz (200; 8.669239ms) May 25 10:10:22.091: INFO: (1) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:1080/proxy/: ... (200; 8.707792ms) May 25 10:10:22.091: INFO: (1) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:462/proxy/: tls qux (200; 8.699882ms) May 25 10:10:22.091: INFO: (1) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct/proxy/: test (200; 8.761102ms) May 25 10:10:22.091: INFO: (1) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:443/proxy/: test<... 
(200; 4.172777ms) May 25 10:10:22.096: INFO: (2) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname2/proxy/: bar (200; 4.977699ms) May 25 10:10:22.096: INFO: (2) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 4.936463ms) May 25 10:10:22.096: INFO: (2) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname1/proxy/: foo (200; 4.94721ms) May 25 10:10:22.096: INFO: (2) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:160/proxy/: foo (200; 5.072024ms) May 25 10:10:22.096: INFO: (2) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 5.137582ms) May 25 10:10:22.096: INFO: (2) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname2/proxy/: tls qux (200; 5.195557ms) May 25 10:10:22.097: INFO: (2) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:460/proxy/: tls baz (200; 5.208466ms) May 25 10:10:22.097: INFO: (2) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct/proxy/: test (200; 5.180236ms) May 25 10:10:22.097: INFO: (2) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:1080/proxy/: ... (200; 5.137243ms) May 25 10:10:22.098: INFO: (2) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname1/proxy/: foo (200; 6.417565ms) May 25 10:10:22.098: INFO: (2) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname1/proxy/: tls baz (200; 6.4628ms) May 25 10:10:22.098: INFO: (2) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname2/proxy/: bar (200; 6.661909ms) May 25 10:10:22.098: INFO: (2) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:462/proxy/: tls qux (200; 7.163792ms) May 25 10:10:22.098: INFO: (2) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:443/proxy/: ... 
(200; 3.940115ms) May 25 10:10:22.103: INFO: (3) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname2/proxy/: bar (200; 4.091437ms) May 25 10:10:22.103: INFO: (3) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname1/proxy/: foo (200; 4.197297ms) May 25 10:10:22.103: INFO: (3) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname1/proxy/: foo (200; 4.51961ms) May 25 10:10:22.103: INFO: (3) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:1080/proxy/: test<... (200; 4.576295ms) May 25 10:10:22.103: INFO: (3) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname1/proxy/: tls baz (200; 4.565347ms) May 25 10:10:22.103: INFO: (3) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:460/proxy/: tls baz (200; 4.761062ms) May 25 10:10:22.103: INFO: (3) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:462/proxy/: tls qux (200; 4.700061ms) May 25 10:10:22.103: INFO: (3) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct/proxy/: test (200; 4.92087ms) May 25 10:10:22.103: INFO: (3) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 4.893187ms) May 25 10:10:22.103: INFO: (3) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:443/proxy/: test<... 
(200; 3.647204ms) May 25 10:10:22.108: INFO: (4) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:443/proxy/: test (200; 4.577325ms) May 25 10:10:22.108: INFO: (4) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname2/proxy/: bar (200; 4.714403ms) May 25 10:10:22.108: INFO: (4) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname2/proxy/: tls qux (200; 4.595536ms) May 25 10:10:22.108: INFO: (4) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 4.66559ms) May 25 10:10:22.109: INFO: (4) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:460/proxy/: tls baz (200; 4.93864ms) May 25 10:10:22.109: INFO: (4) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:1080/proxy/: ... (200; 4.894346ms) May 25 10:10:22.109: INFO: (4) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 4.877406ms) May 25 10:10:22.113: INFO: (5) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct/proxy/: test (200; 3.923726ms) May 25 10:10:22.113: INFO: (5) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname2/proxy/: bar (200; 4.179957ms) May 25 10:10:22.113: INFO: (5) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname1/proxy/: foo (200; 4.427628ms) May 25 10:10:22.113: INFO: (5) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname2/proxy/: tls qux (200; 4.492744ms) May 25 10:10:22.113: INFO: (5) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:160/proxy/: foo (200; 4.581345ms) May 25 10:10:22.114: INFO: (5) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname2/proxy/: bar (200; 4.848934ms) May 25 10:10:22.114: INFO: (5) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:443/proxy/: ... 
(200; 4.998575ms) May 25 10:10:22.114: INFO: (5) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:160/proxy/: foo (200; 4.869417ms) May 25 10:10:22.114: INFO: (5) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:460/proxy/: tls baz (200; 4.957115ms) May 25 10:10:22.114: INFO: (5) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 5.000847ms) May 25 10:10:22.114: INFO: (5) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:1080/proxy/: test<... (200; 4.969011ms) May 25 10:10:22.114: INFO: (5) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname1/proxy/: foo (200; 4.94786ms) May 25 10:10:22.114: INFO: (5) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname1/proxy/: tls baz (200; 4.925123ms) May 25 10:10:22.114: INFO: (5) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:462/proxy/: tls qux (200; 4.984852ms) May 25 10:10:22.114: INFO: (5) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 5.098458ms) May 25 10:10:22.117: INFO: (6) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct/proxy/: test (200; 3.372663ms) May 25 10:10:22.118: INFO: (6) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname1/proxy/: foo (200; 4.264476ms) May 25 10:10:22.118: INFO: (6) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname1/proxy/: foo (200; 4.326324ms) May 25 10:10:22.118: INFO: (6) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 4.419438ms) May 25 10:10:22.118: INFO: (6) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname2/proxy/: bar (200; 4.467796ms) May 25 10:10:22.119: INFO: (6) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:1080/proxy/: test<... 
(200; 4.65493ms) May 25 10:10:22.119: INFO: (6) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:160/proxy/: foo (200; 4.658717ms) May 25 10:10:22.119: INFO: (6) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname2/proxy/: bar (200; 4.667703ms) May 25 10:10:22.119: INFO: (6) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname2/proxy/: tls qux (200; 4.746857ms) May 25 10:10:22.119: INFO: (6) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname1/proxy/: tls baz (200; 4.693416ms) May 25 10:10:22.119: INFO: (6) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:1080/proxy/: ... (200; 4.895694ms) May 25 10:10:22.119: INFO: (6) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:160/proxy/: foo (200; 5.017759ms) May 25 10:10:22.119: INFO: (6) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:460/proxy/: tls baz (200; 4.906684ms) May 25 10:10:22.119: INFO: (6) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 5.024532ms) May 25 10:10:22.119: INFO: (6) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:462/proxy/: tls qux (200; 5.137035ms) May 25 10:10:22.119: INFO: (6) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:443/proxy/: test<... (200; 5.356614ms) May 25 10:10:22.124: INFO: (7) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:1080/proxy/: ... 
(200; 5.361652ms) May 25 10:10:22.124: INFO: (7) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:462/proxy/: tls qux (200; 5.307683ms) May 25 10:10:22.124: INFO: (7) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct/proxy/: test (200; 5.438127ms) May 25 10:10:22.125: INFO: (7) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname1/proxy/: tls baz (200; 5.497061ms) May 25 10:10:22.125: INFO: (7) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:460/proxy/: tls baz (200; 5.643664ms) May 25 10:10:22.125: INFO: (7) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:160/proxy/: foo (200; 5.567014ms) May 25 10:10:22.125: INFO: (7) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:443/proxy/: test<... (200; 4.712844ms) May 25 10:10:22.130: INFO: (8) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname1/proxy/: foo (200; 4.971334ms) May 25 10:10:22.130: INFO: (8) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 4.863002ms) May 25 10:10:22.130: INFO: (8) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:462/proxy/: tls qux (200; 5.119298ms) May 25 10:10:22.130: INFO: (8) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:460/proxy/: tls baz (200; 5.10702ms) May 25 10:10:22.130: INFO: (8) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct/proxy/: test (200; 5.121931ms) May 25 10:10:22.130: INFO: (8) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 5.110648ms) May 25 10:10:22.130: INFO: (8) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:1080/proxy/: ... 
(200; 5.216607ms) May 25 10:10:22.130: INFO: (8) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:443/proxy/: test (200; 3.463961ms) May 25 10:10:22.134: INFO: (9) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 4.307047ms) May 25 10:10:22.135: INFO: (9) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname2/proxy/: bar (200; 4.299257ms) May 25 10:10:22.135: INFO: (9) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname1/proxy/: foo (200; 4.369703ms) May 25 10:10:22.135: INFO: (9) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname1/proxy/: foo (200; 4.377888ms) May 25 10:10:22.135: INFO: (9) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:460/proxy/: tls baz (200; 4.388109ms) May 25 10:10:22.135: INFO: (9) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname1/proxy/: tls baz (200; 4.564724ms) May 25 10:10:22.135: INFO: (9) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:443/proxy/: ... (200; 4.754722ms) May 25 10:10:22.135: INFO: (9) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:462/proxy/: tls qux (200; 4.635249ms) May 25 10:10:22.135: INFO: (9) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:160/proxy/: foo (200; 4.660399ms) May 25 10:10:22.135: INFO: (9) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 4.744954ms) May 25 10:10:22.135: INFO: (9) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:1080/proxy/: test<... 
(200; 4.85117ms) May 25 10:10:22.139: INFO: (10) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname2/proxy/: bar (200; 3.941779ms) May 25 10:10:22.139: INFO: (10) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname2/proxy/: bar (200; 4.169759ms) May 25 10:10:22.140: INFO: (10) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct/proxy/: test (200; 4.461529ms) May 25 10:10:22.140: INFO: (10) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname1/proxy/: foo (200; 4.378363ms) May 25 10:10:22.140: INFO: (10) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname2/proxy/: tls qux (200; 4.500785ms) May 25 10:10:22.140: INFO: (10) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname1/proxy/: tls baz (200; 4.435103ms) May 25 10:10:22.140: INFO: (10) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:462/proxy/: tls qux (200; 4.395289ms) May 25 10:10:22.140: INFO: (10) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 4.454933ms) May 25 10:10:22.140: INFO: (10) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname1/proxy/: foo (200; 4.621105ms) May 25 10:10:22.140: INFO: (10) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 4.83761ms) May 25 10:10:22.140: INFO: (10) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:1080/proxy/: ... (200; 4.786801ms) May 25 10:10:22.140: INFO: (10) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:160/proxy/: foo (200; 4.943316ms) May 25 10:10:22.140: INFO: (10) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:160/proxy/: foo (200; 4.878301ms) May 25 10:10:22.140: INFO: (10) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:1080/proxy/: test<... 
(200; 4.862952ms) May 25 10:10:22.140: INFO: (10) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:443/proxy/: test<... (200; 3.764357ms) May 25 10:10:22.145: INFO: (11) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname2/proxy/: bar (200; 4.629159ms) May 25 10:10:22.145: INFO: (11) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname2/proxy/: bar (200; 4.686495ms) May 25 10:10:22.146: INFO: (11) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname1/proxy/: foo (200; 5.062742ms) May 25 10:10:22.146: INFO: (11) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct/proxy/: test (200; 5.000607ms) May 25 10:10:22.146: INFO: (11) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname1/proxy/: foo (200; 4.988763ms) May 25 10:10:22.146: INFO: (11) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:160/proxy/: foo (200; 5.03953ms) May 25 10:10:22.146: INFO: (11) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname1/proxy/: tls baz (200; 5.138725ms) May 25 10:10:22.146: INFO: (11) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:443/proxy/: ... (200; 5.0792ms) May 25 10:10:22.146: INFO: (11) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 5.237542ms) May 25 10:10:22.146: INFO: (11) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:160/proxy/: foo (200; 5.325677ms) May 25 10:10:22.146: INFO: (11) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:460/proxy/: tls baz (200; 5.210826ms) May 25 10:10:22.146: INFO: (11) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:462/proxy/: tls qux (200; 5.441179ms) May 25 10:10:22.155: INFO: (12) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:1080/proxy/: test<... 
(200; 8.425601ms) May 25 10:10:22.157: INFO: (12) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname1/proxy/: foo (200; 11.324703ms) May 25 10:10:22.158: INFO: (12) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname2/proxy/: bar (200; 11.409009ms) May 25 10:10:22.158: INFO: (12) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname2/proxy/: tls qux (200; 11.377667ms) May 25 10:10:22.158: INFO: (12) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname1/proxy/: tls baz (200; 11.466791ms) May 25 10:10:22.158: INFO: (12) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname1/proxy/: foo (200; 11.584487ms) May 25 10:10:22.158: INFO: (12) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:1080/proxy/: ... (200; 11.757186ms) May 25 10:10:22.158: INFO: (12) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname2/proxy/: bar (200; 11.932023ms) May 25 10:10:22.158: INFO: (12) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:462/proxy/: tls qux (200; 11.904469ms) May 25 10:10:22.158: INFO: (12) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:160/proxy/: foo (200; 12.012636ms) May 25 10:10:22.158: INFO: (12) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct/proxy/: test (200; 12.142349ms) May 25 10:10:22.158: INFO: (12) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 12.179282ms) May 25 10:10:22.158: INFO: (12) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:160/proxy/: foo (200; 12.219987ms) May 25 10:10:22.158: INFO: (12) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:443/proxy/: test<... (200; 4.207411ms) May 25 10:10:22.163: INFO: (13) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:1080/proxy/: ... 
(200; 4.281319ms) May 25 10:10:22.163: INFO: (13) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname1/proxy/: tls baz (200; 4.336382ms) May 25 10:10:22.163: INFO: (13) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname2/proxy/: tls qux (200; 4.403549ms) May 25 10:10:22.163: INFO: (13) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:160/proxy/: foo (200; 4.66832ms) May 25 10:10:22.163: INFO: (13) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 4.697542ms) May 25 10:10:22.163: INFO: (13) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct/proxy/: test (200; 4.737754ms) May 25 10:10:22.163: INFO: (13) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:462/proxy/: tls qux (200; 4.805813ms) May 25 10:10:22.163: INFO: (13) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:460/proxy/: tls baz (200; 4.82704ms) May 25 10:10:22.163: INFO: (13) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:443/proxy/: test<... 
(200; 4.015445ms) May 25 10:10:22.168: INFO: (14) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname2/proxy/: tls qux (200; 4.29213ms) May 25 10:10:22.168: INFO: (14) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:160/proxy/: foo (200; 4.281177ms) May 25 10:10:22.168: INFO: (14) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname2/proxy/: bar (200; 4.362953ms) May 25 10:10:22.168: INFO: (14) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 4.383407ms) May 25 10:10:22.168: INFO: (14) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname1/proxy/: foo (200; 4.404384ms) May 25 10:10:22.168: INFO: (14) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:460/proxy/: tls baz (200; 4.498243ms) May 25 10:10:22.168: INFO: (14) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname1/proxy/: tls baz (200; 4.407067ms) May 25 10:10:22.168: INFO: (14) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct/proxy/: test (200; 4.689839ms) May 25 10:10:22.168: INFO: (14) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:1080/proxy/: ... 
(200; 4.745582ms) May 25 10:10:22.172: INFO: (15) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname1/proxy/: foo (200; 3.691708ms) May 25 10:10:22.173: INFO: (15) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname2/proxy/: bar (200; 4.116867ms) May 25 10:10:22.173: INFO: (15) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname1/proxy/: foo (200; 4.051452ms) May 25 10:10:22.173: INFO: (15) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname2/proxy/: bar (200; 4.079224ms) May 25 10:10:22.173: INFO: (15) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:460/proxy/: tls baz (200; 4.388694ms) May 25 10:10:22.173: INFO: (15) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:160/proxy/: foo (200; 4.401301ms) May 25 10:10:22.173: INFO: (15) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname1/proxy/: tls baz (200; 4.4925ms) May 25 10:10:22.173: INFO: (15) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 4.566735ms) May 25 10:10:22.173: INFO: (15) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname2/proxy/: tls qux (200; 4.512916ms) May 25 10:10:22.173: INFO: (15) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:160/proxy/: foo (200; 4.529205ms) May 25 10:10:22.173: INFO: (15) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:1080/proxy/: test<... 
(200; 4.59631ms) May 25 10:10:22.173: INFO: (15) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 4.685263ms) May 25 10:10:22.173: INFO: (15) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:462/proxy/: tls qux (200; 4.541305ms) May 25 10:10:22.173: INFO: (15) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct/proxy/: test (200; 4.721215ms) May 25 10:10:22.173: INFO: (15) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:1080/proxy/: ... (200; 4.748621ms) May 25 10:10:22.173: INFO: (15) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:443/proxy/: test (200; 3.5995ms) May 25 10:10:22.177: INFO: (16) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname2/proxy/: tls qux (200; 3.928734ms) May 25 10:10:22.177: INFO: (16) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:1080/proxy/: ... (200; 3.998238ms) May 25 10:10:22.178: INFO: (16) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname2/proxy/: bar (200; 4.103467ms) May 25 10:10:22.178: INFO: (16) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname1/proxy/: foo (200; 4.22073ms) May 25 10:10:22.178: INFO: (16) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname1/proxy/: tls baz (200; 4.270403ms) May 25 10:10:22.178: INFO: (16) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname2/proxy/: bar (200; 4.312753ms) May 25 10:10:22.178: INFO: (16) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname1/proxy/: foo (200; 4.397766ms) May 25 10:10:22.178: INFO: (16) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:1080/proxy/: test<... 
(200; 4.317958ms) May 25 10:10:22.178: INFO: (16) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:160/proxy/: foo (200; 4.446573ms) May 25 10:10:22.178: INFO: (16) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:462/proxy/: tls qux (200; 4.33841ms) May 25 10:10:22.178: INFO: (16) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 4.458899ms) May 25 10:10:22.178: INFO: (16) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:460/proxy/: tls baz (200; 4.633501ms) May 25 10:10:22.178: INFO: (16) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 4.68381ms) May 25 10:10:22.178: INFO: (16) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:443/proxy/: test (200; 4.311234ms) May 25 10:10:22.183: INFO: (17) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname1/proxy/: tls baz (200; 4.339022ms) May 25 10:10:22.183: INFO: (17) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname2/proxy/: bar (200; 4.312246ms) May 25 10:10:22.183: INFO: (17) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:160/proxy/: foo (200; 4.406929ms) May 25 10:10:22.183: INFO: (17) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:443/proxy/: ... 
(200; 4.439037ms) May 25 10:10:22.183: INFO: (17) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:460/proxy/: tls baz (200; 4.492862ms) May 25 10:10:22.183: INFO: (17) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 4.517809ms) May 25 10:10:22.183: INFO: (17) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:160/proxy/: foo (200; 4.456408ms) May 25 10:10:22.183: INFO: (17) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname2/proxy/: tls qux (200; 4.43675ms) May 25 10:10:22.183: INFO: (17) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:1080/proxy/: test<... (200; 4.555542ms) May 25 10:10:22.183: INFO: (17) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:462/proxy/: tls qux (200; 4.687537ms) May 25 10:10:22.183: INFO: (17) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 4.77132ms) May 25 10:10:22.186: INFO: (18) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:160/proxy/: foo (200; 3.233865ms) May 25 10:10:22.186: INFO: (18) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct/proxy/: test (200; 3.234176ms) May 25 10:10:22.186: INFO: (18) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:1080/proxy/: test<... (200; 3.319357ms) May 25 10:10:22.187: INFO: (18) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:443/proxy/: ... 
(200; 3.878887ms) May 25 10:10:22.188: INFO: (18) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname2/proxy/: tls qux (200; 4.449841ms) May 25 10:10:22.188: INFO: (18) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname1/proxy/: foo (200; 4.542002ms) May 25 10:10:22.188: INFO: (18) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname2/proxy/: bar (200; 4.534573ms) May 25 10:10:22.188: INFO: (18) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname1/proxy/: tls baz (200; 4.930476ms) May 25 10:10:22.188: INFO: (18) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:160/proxy/: foo (200; 5.076106ms) May 25 10:10:22.188: INFO: (18) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:460/proxy/: tls baz (200; 5.019103ms) May 25 10:10:22.188: INFO: (18) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname2/proxy/: bar (200; 4.959551ms) May 25 10:10:22.188: INFO: (18) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname1/proxy/: foo (200; 5.051514ms) May 25 10:10:22.188: INFO: (18) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 5.099532ms) May 25 10:10:22.188: INFO: (18) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:462/proxy/: tls qux (200; 5.126793ms) May 25 10:10:22.191: INFO: (19) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:462/proxy/: tls qux (200; 2.539167ms) May 25 10:10:22.192: INFO: (19) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname2/proxy/: bar (200; 3.435429ms) May 25 10:10:22.192: INFO: (19) /api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:443/proxy/: test (200; 3.732023ms) May 25 10:10:22.192: INFO: (19) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:160/proxy/: foo (200; 3.841641ms) May 25 10:10:22.192: INFO: (19) 
/api/v1/namespaces/proxy-7795/pods/https:proxy-service-w5n9m-nmvct:460/proxy/: tls baz (200; 3.756385ms) May 25 10:10:22.192: INFO: (19) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:160/proxy/: foo (200; 3.882999ms) May 25 10:10:22.192: INFO: (19) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:1080/proxy/: ... (200; 3.779277ms) May 25 10:10:22.192: INFO: (19) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:1080/proxy/: test<... (200; 3.821919ms) May 25 10:10:22.192: INFO: (19) /api/v1/namespaces/proxy-7795/pods/http:proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 4.061797ms) May 25 10:10:22.193: INFO: (19) /api/v1/namespaces/proxy-7795/pods/proxy-service-w5n9m-nmvct:162/proxy/: bar (200; 4.255659ms) May 25 10:10:22.193: INFO: (19) /api/v1/namespaces/proxy-7795/services/http:proxy-service-w5n9m:portname1/proxy/: foo (200; 4.605343ms) May 25 10:10:22.193: INFO: (19) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname1/proxy/: tls baz (200; 4.844179ms) May 25 10:10:22.193: INFO: (19) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname2/proxy/: bar (200; 5.03006ms) May 25 10:10:22.193: INFO: (19) /api/v1/namespaces/proxy-7795/services/proxy-service-w5n9m:portname1/proxy/: foo (200; 4.980496ms) May 25 10:10:22.193: INFO: (19) /api/v1/namespaces/proxy-7795/services/https:proxy-service-w5n9m:tlsportname2/proxy/: tls qux (200; 4.985582ms) STEP: deleting ReplicationController proxy-service-w5n9m in namespace proxy-7795, will wait for the garbage collector to delete the pods May 25 10:10:22.251: INFO: Deleting ReplicationController proxy-service-w5n9m took: 4.697315ms May 25 10:10:22.352: INFO: Terminating ReplicationController proxy-service-w5n9m pods took: 101.04043ms [AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:35.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "proxy-7795" for this suite.

• [SLOW TEST:17.098 seconds]
[sig-network] Proxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
    should proxy through a service and a pod [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":-1,"completed":9,"skipped":141,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:10:35.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting the auto-created API token
May 25 10:10:35.631: INFO: created pod pod-service-account-defaultsa
May 25 10:10:35.631: INFO: pod pod-service-account-defaultsa service account token volume mount: true
May 25 10:10:35.636: INFO: created pod pod-service-account-mountsa
May 25 10:10:35.636: INFO: pod pod-service-account-mountsa service account token volume mount: true
May 25 10:10:35.640: INFO: created pod pod-service-account-nomountsa
May 25 10:10:35.640: INFO: pod pod-service-account-nomountsa service account token volume mount: false
May 25 10:10:35.643: INFO: created pod pod-service-account-defaultsa-mountspec
May 25 10:10:35.643: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
May 25 10:10:35.647: INFO: created pod pod-service-account-mountsa-mountspec
May 25 10:10:35.647: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
May 25 10:10:35.650: INFO: created pod pod-service-account-nomountsa-mountspec
May 25 10:10:35.650: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
May 25 10:10:35.653: INFO: created pod pod-service-account-defaultsa-nomountspec
May 25 10:10:35.653: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
May 25 10:10:35.656: INFO: created pod pod-service-account-mountsa-nomountspec
May 25 10:10:35.656: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
May 25 10:10:35.659: INFO: created pod pod-service-account-nomountsa-nomountspec
May 25 10:10:35.659: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:10:35.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1717" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":10,"skipped":147,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:10:27.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to change the type from NodePort to ExternalName [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a service nodeport-service with the type=NodePort in namespace services-4112
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-4112
STEP: creating replication controller externalsvc in namespace services-4112
I0525 10:10:27.072913 21 runners.go:190] Created replication controller with name: externalsvc, namespace: services-4112, replica count: 2
I0525 10:10:30.124313 21 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the NodePort service to type=ExternalName
May 25 10:10:30.143: INFO: Creating new exec pod
May 25 10:10:32.155: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-4112 exec execpodkzj5h -- /bin/sh -x -c nslookup nodeport-service.services-4112.svc.cluster.local'
May 25 10:10:32.403: INFO: stderr: "+ nslookup nodeport-service.services-4112.svc.cluster.local\n"
May 25 10:10:32.403: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-4112.svc.cluster.local\tcanonical name = externalsvc.services-4112.svc.cluster.local.\nName:\texternalsvc.services-4112.svc.cluster.local\nAddress: 10.96.199.173\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-4112, will wait for the garbage collector to delete the pods
May 25 10:10:32.463: INFO: Deleting ReplicationController externalsvc took: 5.485971ms
May 25 10:10:32.563: INFO: Terminating ReplicationController externalsvc pods took: 100.187183ms
May 25 10:10:40.774: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:10:40.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4112" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• [SLOW TEST:13.784 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":8,"skipped":177,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:10:40.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be immutable if `immutable` field is set [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:10:41.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6202" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":9,"skipped":180,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:10:30.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[BeforeEach] Kubectl replace
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1548
[It] should update a single-container pod's image [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
May 25 10:10:30.670: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-2871 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod'
May 25 10:10:30.800: INFO: stderr: ""
May 25 10:10:30.800: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
May 25 10:10:35.852: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-2871 get pod e2e-test-httpd-pod -o json'
May 25 10:10:35.969: INFO: stderr: ""
May 25 10:10:35.969: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n 
\"k8s.v1.cni.cncf.io/network-status\": \"[{\\n \\\"name\\\": \\\"\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.2.206\\\"\\n ],\\n \\\"mac\\\": \\\"46:ce:38:91:d3:6d\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"k8s.v1.cni.cncf.io/networks-status\": \"[{\\n \\\"name\\\": \\\"\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.2.206\\\"\\n ],\\n \\\"mac\\\": \\\"46:ce:38:91:d3:6d\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\"\n },\n \"creationTimestamp\": \"2021-05-25T10:10:30Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-2871\",\n \"resourceVersion\": \"495028\",\n \"uid\": \"dc443146-8407-4c1f-9358-3b9fa9698ea2\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-ln5g7\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"v1.21-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": 
\"kube-api-access-ln5g7\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-25T10:10:30Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-25T10:10:31Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-25T10:10:31Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-25T10:10:30Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://836d846b9f74a74e2cada3efc7d752437b790f1ae1ab4ce7b4b4543f4d7ffb26\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imageID\": \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-25T10:10:31Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.2\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.206\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.206\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2021-05-25T10:10:30Z\"\n }\n}\n" STEP: replace the image in the pod May 25 10:10:35.969: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-2871 replace -f -' May 25 10:10:36.375: INFO: stderr: "" May 25 10:10:36.375: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-1 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1552 May 25 10:10:36.379: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-2871 delete pods e2e-test-httpd-pod' May 25 10:10:46.788: INFO: stderr: "" May 25 10:10:46.788: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:46.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2871" for this suite. 
• [SLOW TEST:16.347 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":13,"skipped":247,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:41.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 25 10:10:41.274: INFO: Waiting up to 5m0s for pod "downwardapi-volume-84242d68-ac27-454f-b9e8-a99516e3f8ea" in namespace "projected-5979" to be "Succeeded or Failed" May 25 10:10:41.276: INFO: Pod "downwardapi-volume-84242d68-ac27-454f-b9e8-a99516e3f8ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.70049ms May 25 10:10:43.379: INFO: Pod "downwardapi-volume-84242d68-ac27-454f-b9e8-a99516e3f8ea": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.105529335s May 25 10:10:45.383: INFO: Pod "downwardapi-volume-84242d68-ac27-454f-b9e8-a99516e3f8ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109745135s May 25 10:10:47.480: INFO: Pod "downwardapi-volume-84242d68-ac27-454f-b9e8-a99516e3f8ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.20629227s STEP: Saw pod success May 25 10:10:47.480: INFO: Pod "downwardapi-volume-84242d68-ac27-454f-b9e8-a99516e3f8ea" satisfied condition "Succeeded or Failed" May 25 10:10:47.584: INFO: Trying to get logs from node v1.21-worker pod downwardapi-volume-84242d68-ac27-454f-b9e8-a99516e3f8ea container client-container: STEP: delete the pod May 25 10:10:47.691: INFO: Waiting for pod downwardapi-volume-84242d68-ac27-454f-b9e8-a99516e3f8ea to disappear May 25 10:10:47.693: INFO: Pod downwardapi-volume-84242d68-ac27-454f-b9e8-a99516e3f8ea no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:47.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5979" for this suite. 
• [SLOW TEST:6.467 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":193,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:35.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:10:35.707: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 25 10:10:39.611: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4462 --namespace=crd-publish-openapi-4462 create -f -' May 25 10:10:40.813: INFO: stderr: "" May 25 10:10:40.813: INFO: stdout: "e2e-test-crd-publish-openapi-9344-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 25 10:10:40.814: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4462 --namespace=crd-publish-openapi-4462 delete 
e2e-test-crd-publish-openapi-9344-crds test-cr' May 25 10:10:41.188: INFO: stderr: "" May 25 10:10:41.188: INFO: stdout: "e2e-test-crd-publish-openapi-9344-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 25 10:10:41.188: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4462 --namespace=crd-publish-openapi-4462 apply -f -' May 25 10:10:42.200: INFO: stderr: "" May 25 10:10:42.200: INFO: stdout: "e2e-test-crd-publish-openapi-9344-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 25 10:10:42.200: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4462 --namespace=crd-publish-openapi-4462 delete e2e-test-crd-publish-openapi-9344-crds test-cr' May 25 10:10:42.985: INFO: stderr: "" May 25 10:10:42.985: INFO: stdout: "e2e-test-crd-publish-openapi-9344-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 25 10:10:42.985: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4462 explain e2e-test-crd-publish-openapi-9344-crds' May 25 10:10:43.615: INFO: stderr: "" May 25 10:10:43.615: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9344-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:47.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4462" for this suite. 
• [SLOW TEST:12.242 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":11,"skipped":155,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:20.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:10:20.246: INFO: created pod May 25 10:10:20.246: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-7187" to be "Succeeded or Failed" May 25 10:10:20.249: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 3.049869ms May 25 10:10:22.254: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.00725153s STEP: Saw pod success May 25 10:10:22.254: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" May 25 10:10:52.254: INFO: polling logs May 25 10:10:52.261: INFO: Pod logs: 2021/05/25 10:10:21 OK: Got token 2021/05/25 10:10:21 validating with in-cluster discovery 2021/05/25 10:10:21 OK: got issuer https://kubernetes.default.svc.cluster.local 2021/05/25 10:10:21 Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-7187:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1621938020, NotBefore:1621937420, IssuedAt:1621937420, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-7187", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"1123203a-f2cc-4ce8-bb37-9b224f690cb4"}}} 2021/05/25 10:10:21 OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local 2021/05/25 10:10:21 OK: Validated signature on JWT 2021/05/25 10:10:21 OK: Got valid claims from token! 2021/05/25 10:10:21 Full, validated claims: &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-7187:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1621938020, NotBefore:1621937420, IssuedAt:1621937420, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-7187", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"1123203a-f2cc-4ce8-bb37-9b224f690cb4"}}} May 25 10:10:52.261: INFO: completed pod [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:52.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7187" for this suite. 
• [SLOW TEST:32.074 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":15,"skipped":245,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:47.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs May 25 10:10:47.704: INFO: Waiting up to 5m0s for pod "pod-aee32d96-d926-4c4c-9c48-769736d9fc43" in namespace "emptydir-9693" to be "Succeeded or Failed" May 25 10:10:47.706: INFO: Pod "pod-aee32d96-d926-4c4c-9c48-769736d9fc43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102485ms May 25 10:10:49.719: INFO: Pod "pod-aee32d96-d926-4c4c-9c48-769736d9fc43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014824726s May 25 10:10:51.724: INFO: Pod "pod-aee32d96-d926-4c4c-9c48-769736d9fc43": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.019617127s May 25 10:10:53.728: INFO: Pod "pod-aee32d96-d926-4c4c-9c48-769736d9fc43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023990663s STEP: Saw pod success May 25 10:10:53.728: INFO: Pod "pod-aee32d96-d926-4c4c-9c48-769736d9fc43" satisfied condition "Succeeded or Failed" May 25 10:10:53.732: INFO: Trying to get logs from node v1.21-worker pod pod-aee32d96-d926-4c4c-9c48-769736d9fc43 container test-container: STEP: delete the pod May 25 10:10:53.744: INFO: Waiting for pod pod-aee32d96-d926-4c4c-9c48-769736d9fc43 to disappear May 25 10:10:53.747: INFO: Pod pod-aee32d96-d926-4c4c-9c48-769736d9fc43 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:53.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9693" for this suite. • [SLOW TEST:6.668 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":253,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:47.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:10:47.980: INFO: The status of Pod busybox-readonly-fs592b4123-7c28-4ccf-8fa3-afc02b6a8b53 is Pending, waiting for it to be Running (with Ready = true) May 25 10:10:49.984: INFO: The status of Pod busybox-readonly-fs592b4123-7c28-4ccf-8fa3-afc02b6a8b53 is Pending, waiting for it to be Running (with Ready = true) May 25 10:10:51.984: INFO: The status of Pod busybox-readonly-fs592b4123-7c28-4ccf-8fa3-afc02b6a8b53 is Pending, waiting for it to be Running (with Ready = true) May 25 10:10:53.984: INFO: The status of Pod busybox-readonly-fs592b4123-7c28-4ccf-8fa3-afc02b6a8b53 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:53.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4425" for this suite. 
• [SLOW TEST:6.062 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a read only busybox container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:188 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":163,"failed":0} SSSSSSSS ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":13,"skipped":183,"failed":0} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:09:51.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-3969 May 25 10:09:51.484: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) May 25 10:09:53.489: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) May 25 10:09:53.492: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-3969 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 25 10:09:53.767: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" May 25 10:09:53.767: INFO: stdout: "iptables" May 25 10:09:53.767: INFO: proxyMode: iptables May 25 10:09:53.777: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 25 10:09:53.780: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-3969 STEP: creating replication controller affinity-clusterip-timeout in namespace services-3969 I0525 10:09:53.793946 29 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-3969, replica count: 3 I0525 10:09:56.845156 29 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 25 10:09:56.850: INFO: Creating new exec pod May 25 10:10:04.181: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-3969 exec execpod-affinitysl77h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' May 25 10:10:04.446: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" May 25 10:10:04.446: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 25 10:10:04.446: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-3969 exec execpod-affinitysl77h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.96.23.113 80' May 25 10:10:04.718: 
INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.96.23.113 80\nConnection to 10.96.23.113 80 port [tcp/http] succeeded!\n" May 25 10:10:04.718: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 25 10:10:04.718: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-3969 exec execpod-affinitysl77h -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.23.113:80/ ; done' May 25 10:10:05.082: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.23.113:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.23.113:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.23.113:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.23.113:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.23.113:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.23.113:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.23.113:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.23.113:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.23.113:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.23.113:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.23.113:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.23.113:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.23.113:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.23.113:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.23.113:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.23.113:80/\n" May 25 10:10:05.082: INFO: stdout: 
"\naffinity-clusterip-timeout-wjgzf\naffinity-clusterip-timeout-wjgzf\naffinity-clusterip-timeout-wjgzf\naffinity-clusterip-timeout-wjgzf\naffinity-clusterip-timeout-wjgzf\naffinity-clusterip-timeout-wjgzf\naffinity-clusterip-timeout-wjgzf\naffinity-clusterip-timeout-wjgzf\naffinity-clusterip-timeout-wjgzf\naffinity-clusterip-timeout-wjgzf\naffinity-clusterip-timeout-wjgzf\naffinity-clusterip-timeout-wjgzf\naffinity-clusterip-timeout-wjgzf\naffinity-clusterip-timeout-wjgzf\naffinity-clusterip-timeout-wjgzf\naffinity-clusterip-timeout-wjgzf" May 25 10:10:05.082: INFO: Received response from host: affinity-clusterip-timeout-wjgzf May 25 10:10:05.082: INFO: Received response from host: affinity-clusterip-timeout-wjgzf May 25 10:10:05.082: INFO: Received response from host: affinity-clusterip-timeout-wjgzf May 25 10:10:05.082: INFO: Received response from host: affinity-clusterip-timeout-wjgzf May 25 10:10:05.083: INFO: Received response from host: affinity-clusterip-timeout-wjgzf May 25 10:10:05.083: INFO: Received response from host: affinity-clusterip-timeout-wjgzf May 25 10:10:05.083: INFO: Received response from host: affinity-clusterip-timeout-wjgzf May 25 10:10:05.083: INFO: Received response from host: affinity-clusterip-timeout-wjgzf May 25 10:10:05.083: INFO: Received response from host: affinity-clusterip-timeout-wjgzf May 25 10:10:05.083: INFO: Received response from host: affinity-clusterip-timeout-wjgzf May 25 10:10:05.083: INFO: Received response from host: affinity-clusterip-timeout-wjgzf May 25 10:10:05.083: INFO: Received response from host: affinity-clusterip-timeout-wjgzf May 25 10:10:05.083: INFO: Received response from host: affinity-clusterip-timeout-wjgzf May 25 10:10:05.083: INFO: Received response from host: affinity-clusterip-timeout-wjgzf May 25 10:10:05.083: INFO: Received response from host: affinity-clusterip-timeout-wjgzf May 25 10:10:05.083: INFO: Received response from host: affinity-clusterip-timeout-wjgzf May 25 10:10:05.083: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-3969 exec execpod-affinitysl77h -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.96.23.113:80/' May 25 10:10:05.293: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.96.23.113:80/\n" May 25 10:10:05.293: INFO: stdout: "affinity-clusterip-timeout-wjgzf" May 25 10:10:25.294: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-3969 exec execpod-affinitysl77h -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.96.23.113:80/' May 25 10:10:25.570: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.96.23.113:80/\n" May 25 10:10:25.570: INFO: stdout: "affinity-clusterip-timeout-wjgzf" May 25 10:10:45.571: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-3969 exec execpod-affinitysl77h -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.96.23.113:80/' May 25 10:10:46.082: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.96.23.113:80/\n" May 25 10:10:46.175: INFO: stdout: "affinity-clusterip-timeout-pd6ml" May 25 10:10:46.175: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-3969, will wait for the garbage collector to delete the pods May 25 10:10:46.980: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 149.63246ms May 25 10:10:47.480: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 500.351506ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:55.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3969" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:64.061 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":14,"skipped":183,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:53.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 25 10:10:53.802: INFO: Waiting up to 5m0s for pod "downwardapi-volume-67dc4992-e854-4282-98c7-e88011615409" in namespace "projected-9153" to be "Succeeded or Failed" May 25 10:10:53.805: INFO: Pod "downwardapi-volume-67dc4992-e854-4282-98c7-e88011615409": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.966126ms May 25 10:10:55.810: INFO: Pod "downwardapi-volume-67dc4992-e854-4282-98c7-e88011615409": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007873203s STEP: Saw pod success May 25 10:10:55.810: INFO: Pod "downwardapi-volume-67dc4992-e854-4282-98c7-e88011615409" satisfied condition "Succeeded or Failed" May 25 10:10:55.817: INFO: Trying to get logs from node v1.21-worker2 pod downwardapi-volume-67dc4992-e854-4282-98c7-e88011615409 container client-container: STEP: delete the pod May 25 10:10:55.833: INFO: Waiting for pod downwardapi-volume-67dc4992-e854-4282-98c7-e88011615409 to disappear May 25 10:10:55.835: INFO: Pod downwardapi-volume-67dc4992-e854-4282-98c7-e88011615409 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:55.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9153" for this suite. 
• ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:52.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating Pod STEP: Reading file content from the nginx-container May 25 10:10:56.391: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-5289 PodName:pod-sharedvolume-fa6dbc47-b684-4dc9-9812-7ca236ea6135 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 10:10:56.391: INFO: >>> kubeConfig: /root/.kube/config May 25 10:10:56.482: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:56.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5289" for this suite. 
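The shared-volume check above (the main container `cat`s a file that a sibling container wrote into the emptyDir) behaves like two processes sharing one directory. A minimal local sketch, with a temp directory standing in for the volume (names hypothetical):

```python
import pathlib
import tempfile

def shared_volume_roundtrip(message: str) -> str:
    """Write a file as the 'writer' container would, then read it back the
    way the reader runs: cat /usr/share/volumeshare/shareddata.txt."""
    with tempfile.TemporaryDirectory() as vol:       # stands in for the emptyDir
        shared = pathlib.Path(vol) / "shareddata.txt"
        shared.write_text(message)                   # writer container's side
        return shared.read_text()                    # reader container's side

print(shared_volume_roundtrip("hello from the shared volume"))
```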
• ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:54.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating secret secrets-7099/secret-test-7966ea72-3e01-40df-ab3b-5126483cc0f5 STEP: Creating a pod to test consume secrets May 25 10:10:54.068: INFO: Waiting up to 5m0s for pod "pod-configmaps-b35a40a2-233b-4869-93d0-fb080623981d" in namespace "secrets-7099" to be "Succeeded or Failed" May 25 10:10:54.071: INFO: Pod "pod-configmaps-b35a40a2-233b-4869-93d0-fb080623981d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.883964ms May 25 10:10:56.076: INFO: Pod "pod-configmaps-b35a40a2-233b-4869-93d0-fb080623981d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007112435s May 25 10:10:58.079: INFO: Pod "pod-configmaps-b35a40a2-233b-4869-93d0-fb080623981d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010583221s STEP: Saw pod success May 25 10:10:58.079: INFO: Pod "pod-configmaps-b35a40a2-233b-4869-93d0-fb080623981d" satisfied condition "Succeeded or Failed" May 25 10:10:58.082: INFO: Trying to get logs from node v1.21-worker pod pod-configmaps-b35a40a2-233b-4869-93d0-fb080623981d container env-test: STEP: delete the pod May 25 10:10:58.096: INFO: Waiting for pod pod-configmaps-b35a40a2-233b-4869-93d0-fb080623981d to disappear May 25 10:10:58.099: INFO: Pod pod-configmaps-b35a40a2-233b-4869-93d0-fb080623981d no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:58.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7099" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":171,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":16,"skipped":279,"failed":0} [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:56.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:10:56.525: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] 
CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:10:59.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9675" for this suite. • ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:09:44.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0525 10:09:44.887478 32 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ReplaceConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring the job is replaced with a new one STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:11:00.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-3203" for this suite. 
• [SLOW TEST:76.156 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":11,"skipped":221,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:31.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-3757 STEP: creating a selector STEP: Creating the service pods in kubernetes May 25 10:10:31.652: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 25 10:10:31.675: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 25 10:10:33.679: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 25 10:10:35.679: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 10:10:37.679: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 10:10:39.679: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 10:10:41.978: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 10:10:43.780: INFO: The status of Pod 
netserver-0 is Running (Ready = false) May 25 10:10:45.778: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 10:10:47.681: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 10:10:49.678: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 10:10:51.680: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 10:10:53.679: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 10:10:55.679: INFO: The status of Pod netserver-0 is Running (Ready = true) May 25 10:10:55.685: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 25 10:11:01.714: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 May 25 10:11:01.714: INFO: Going to poll 10.244.1.27 on port 8080 at least 0 times, with a maximum of 34 tries before failing May 25 10:11:01.716: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.27:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3757 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 10:11:01.716: INFO: >>> kubeConfig: /root/.kube/config May 25 10:11:01.839: INFO: Found all 1 expected endpoints: [netserver-0] May 25 10:11:01.839: INFO: Going to poll 10.244.2.207 on port 8080 at least 0 times, with a maximum of 34 tries before failing May 25 10:11:01.842: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.207:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3757 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 10:11:01.842: INFO: >>> kubeConfig: /root/.kube/config May 25 10:11:01.958: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] 
Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:11:01.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3757" for this suite. • [SLOW TEST:30.350 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":232,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:29.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostport STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled May 25 10:10:29.419: INFO: The status of Pod pod1 is 
Pending, waiting for it to be Running (with Ready = true) May 25 10:10:31.423: INFO: The status of Pod pod1 is Running (Ready = true) STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 172.18.0.4 on the node which pod1 resides and expect scheduled May 25 10:10:31.432: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 25 10:10:33.437: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 25 10:10:35.441: INFO: The status of Pod pod2 is Running (Ready = false) May 25 10:10:37.435: INFO: The status of Pod pod2 is Running (Ready = false) May 25 10:10:39.479: INFO: The status of Pod pod2 is Running (Ready = false) May 25 10:10:41.485: INFO: The status of Pod pod2 is Running (Ready = false) May 25 10:10:43.497: INFO: The status of Pod pod2 is Running (Ready = false) May 25 10:10:45.477: INFO: The status of Pod pod2 is Running (Ready = false) May 25 10:10:47.480: INFO: The status of Pod pod2 is Running (Ready = true) STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 172.18.0.4 but use UDP protocol on the node which pod2 resides May 25 10:10:47.680: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) May 25 10:10:49.684: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) May 25 10:10:51.685: INFO: The status of Pod pod3 is Running (Ready = false) May 25 10:10:53.684: INFO: The status of Pod pod3 is Running (Ready = true) May 25 10:10:53.691: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) May 25 10:10:55.695: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) May 25 10:10:57.694: INFO: The status of Pod e2e-host-exec is Running (Ready = true) STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 May 25 10:10:57.697: INFO: ExecWithOptions 
{Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.4 http://127.0.0.1:54323/hostname] Namespace:hostport-6892 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 10:10:57.697: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.4, port: 54323 May 25 10:10:57.821: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.4:54323/hostname] Namespace:hostport-6892 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 10:10:57.821: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.4, port: 54323 UDP May 25 10:10:57.927: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.4 54323] Namespace:hostport-6892 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 10:10:57.927: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:11:03.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostport-6892" for this suite. 
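All three pods above schedule successfully because a hostPort binding only collides when port, protocol, and effective host IP all overlap. A simplified sketch of that rule (the real kubelet logic also handles IPv6, hostNetwork, and unset hostIP defaults):

```python
def hostports_conflict(a, b):
    """a, b are (hostIP, protocol, hostPort) tuples. Bindings collide only
    when port and protocol match and the host IPs overlap
    (0.0.0.0 overlaps every address)."""
    (ip_a, proto_a, port_a), (ip_b, proto_b, port_b) = a, b
    if port_a != port_b or proto_a != proto_b:
        return False
    return ip_a == ip_b or "0.0.0.0" in (ip_a, ip_b)

pod1 = ("127.0.0.1", "TCP", 54323)   # as in the test above
pod2 = ("172.18.0.4", "TCP", 54323)  # same port, different hostIP: no conflict
pod3 = ("172.18.0.4", "UDP", 54323)  # same port and IP, different protocol: no conflict
print(hostports_conflict(pod1, pod2), hostports_conflict(pod2, pod3))
```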
• [SLOW TEST:33.677 seconds] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":113,"failed":0} S ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:55.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod May 25 10:10:55.569: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:11:03.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-742" for this suite. 
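The init-container invocation verified above follows a fixed contract: init containers run one at a time, in order, and app containers start only after every init container has exited successfully. A toy model of that ordering (hypothetical, not the kubelet's code):

```python
def run_pod(init_containers, app_containers):
    """init_containers: list of (name, succeeds) pairs, run sequentially.
    App containers start only after every init container succeeds; with
    restartPolicy Never, a failed init container fails the whole pod."""
    started = []
    for name, succeeds in init_containers:
        started.append(name)
        if not succeeds:
            return started, "Failed"
    started.extend(name for name, _ in app_containers)
    return started, "Succeeded"

order, phase = run_pod([("init1", True), ("init2", True)], [("run1", None)])
print(order, phase)  # ['init1', 'init2', 'run1'] Succeeded
```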
• [SLOW TEST:7.915 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":15,"skipped":202,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:11:02.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-e7d8b68d-98b5-4cd3-9319-09ef76e1c211 STEP: Creating a pod to test consume secrets May 25 10:11:02.046: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ee2f037b-52e2-457a-8f4f-4a9aa29151f2" in namespace "projected-5139" to be "Succeeded or Failed" May 25 10:11:02.049: INFO: Pod "pod-projected-secrets-ee2f037b-52e2-457a-8f4f-4a9aa29151f2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.074947ms May 25 10:11:04.054: INFO: Pod "pod-projected-secrets-ee2f037b-52e2-457a-8f4f-4a9aa29151f2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007620383s May 25 10:11:06.058: INFO: Pod "pod-projected-secrets-ee2f037b-52e2-457a-8f4f-4a9aa29151f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011642349s May 25 10:11:08.062: INFO: Pod "pod-projected-secrets-ee2f037b-52e2-457a-8f4f-4a9aa29151f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016196579s STEP: Saw pod success May 25 10:11:08.063: INFO: Pod "pod-projected-secrets-ee2f037b-52e2-457a-8f4f-4a9aa29151f2" satisfied condition "Succeeded or Failed" May 25 10:11:08.065: INFO: Trying to get logs from node v1.21-worker pod pod-projected-secrets-ee2f037b-52e2-457a-8f4f-4a9aa29151f2 container projected-secret-volume-test: STEP: delete the pod May 25 10:11:08.076: INFO: Waiting for pod pod-projected-secrets-ee2f037b-52e2-457a-8f4f-4a9aa29151f2 to disappear May 25 10:11:08.078: INFO: Pod pod-projected-secrets-ee2f037b-52e2-457a-8f4f-4a9aa29151f2 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:11:08.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5139" for this suite. 
• [SLOW TEST:6.081 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":252,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:11:01.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-031ceffe-60bc-4712-ae81-207d8adddb31 STEP: Creating a pod to test consume configMaps May 25 10:11:01.050: INFO: Waiting up to 5m0s for pod "pod-configmaps-4037114a-afd9-470b-8c25-6fabc697db7f" in namespace "configmap-7678" to be "Succeeded or Failed" May 25 10:11:01.052: INFO: Pod "pod-configmaps-4037114a-afd9-470b-8c25-6fabc697db7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.440293ms May 25 10:11:03.056: INFO: Pod "pod-configmaps-4037114a-afd9-470b-8c25-6fabc697db7f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006569163s May 25 10:11:05.060: INFO: Pod "pod-configmaps-4037114a-afd9-470b-8c25-6fabc697db7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010817247s May 25 10:11:07.064: INFO: Pod "pod-configmaps-4037114a-afd9-470b-8c25-6fabc697db7f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014534253s May 25 10:11:09.068: INFO: Pod "pod-configmaps-4037114a-afd9-470b-8c25-6fabc697db7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018738392s STEP: Saw pod success May 25 10:11:09.068: INFO: Pod "pod-configmaps-4037114a-afd9-470b-8c25-6fabc697db7f" satisfied condition "Succeeded or Failed" May 25 10:11:09.073: INFO: Trying to get logs from node v1.21-worker2 pod pod-configmaps-4037114a-afd9-470b-8c25-6fabc697db7f container agnhost-container: STEP: delete the pod May 25 10:11:09.087: INFO: Waiting for pod pod-configmaps-4037114a-afd9-470b-8c25-6fabc697db7f to disappear May 25 10:11:09.089: INFO: Pod pod-configmaps-4037114a-afd9-470b-8c25-6fabc697db7f no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:11:09.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7678" for this suite. 
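The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` lines come from a poll-until-terminal-phase loop. A condensed sketch of that pattern (the real framework also sleeps between polls and logs the elapsed time):

```python
def wait_for_terminal_phase(get_phase, max_polls=150):
    """Poll a phase source until the pod reports a terminal phase,
    mirroring the Pending -> Pending -> Succeeded progression above."""
    for _ in range(max_polls):
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
    raise TimeoutError("pod never reached a terminal phase")

phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_terminal_phase(lambda: next(phases)))  # Succeeded
```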
• [SLOW TEST:8.081 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":226,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:11:03.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 25 10:11:03.098: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c48b1087-484b-4446-90fe-bcf051aa6f93" in namespace "downward-api-3459" to be "Succeeded or Failed" May 25 10:11:03.102: INFO: Pod "downwardapi-volume-c48b1087-484b-4446-90fe-bcf051aa6f93": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071684ms May 25 10:11:05.106: INFO: Pod "downwardapi-volume-c48b1087-484b-4446-90fe-bcf051aa6f93": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007603423s May 25 10:11:07.110: INFO: Pod "downwardapi-volume-c48b1087-484b-4446-90fe-bcf051aa6f93": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011810014s May 25 10:11:09.115: INFO: Pod "downwardapi-volume-c48b1087-484b-4446-90fe-bcf051aa6f93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016245716s STEP: Saw pod success May 25 10:11:09.115: INFO: Pod "downwardapi-volume-c48b1087-484b-4446-90fe-bcf051aa6f93" satisfied condition "Succeeded or Failed" May 25 10:11:09.118: INFO: Trying to get logs from node v1.21-worker pod downwardapi-volume-c48b1087-484b-4446-90fe-bcf051aa6f93 container client-container: STEP: delete the pod May 25 10:11:09.133: INFO: Waiting for pod downwardapi-volume-c48b1087-484b-4446-90fe-bcf051aa6f93 to disappear May 25 10:11:09.136: INFO: Pod downwardapi-volume-c48b1087-484b-4446-90fe-bcf051aa6f93 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:11:09.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3459" for this suite. 
• [SLOW TEST:6.095 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":114,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:58.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:10:58.168: INFO: Creating deployment "webserver-deployment" May 25 10:10:58.172: INFO: Waiting for observed generation 1 May 25 10:11:00.178: INFO: Waiting for all required pods to come up May 25 10:11:00.183: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 25 10:11:10.191: INFO: Waiting for deployment "webserver-deployment" to complete May 25 10:11:10.198: INFO: Updating deployment "webserver-deployment" with a non-existent image May 25 10:11:10.208: INFO: Updating deployment webserver-deployment May 25 10:11:10.208: INFO: Waiting for observed 
generation 2 May 25 10:11:12.215: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 25 10:11:12.218: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 25 10:11:12.221: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 25 10:11:12.231: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 25 10:11:12.231: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 25 10:11:12.234: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 25 10:11:12.240: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 25 10:11:12.240: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 25 10:11:12.250: INFO: Updating deployment webserver-deployment May 25 10:11:12.250: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 25 10:11:12.255: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 25 10:11:12.258: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 May 25 10:11:14.267: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-8411 bb301dd3-7c13-4f99-98fc-790195673124 496414 3 2021-05-25 10:10:58 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-05-25 10:10:58 +0000 UTC FieldsV1 
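The replica counts the test verifies above (old replicaset 8 → 20, new replicaset 5 → 13 when the Deployment scales from 10 to 30 with maxSurge=3) follow from proportional scaling: the delta is split across replicasets in proportion to their current sizes. Below is a minimal sketch of that arithmetic, not the deployment controller's actual implementation; in particular, the leftover-distribution order (newest replicaset first) is an assumption of this sketch.

```python
def proportional_scale(rs_sizes, new_total):
    """Illustrative approximation of Deployment proportional scaling:
    split the scaling delta across replicasets in proportion to their
    current sizes, then hand out leftover replicas one at a time.
    (Leftover order -- newest replicaset first -- is an assumption of
    this sketch, not the controller's exact tie-breaking rule.)"""
    current_total = sum(rs_sizes.values())
    delta = new_total - current_total
    # Floor each replicaset's proportional share of the delta.
    shares = {name: (delta * size) // current_total
              for name, size in rs_sizes.items()}
    leftover = delta - sum(shares.values())
    for name in reversed(list(rs_sizes)):  # newest-first (assumed)
        if leftover == 0:
            break
        shares[name] += 1
        leftover -= 1
    return {name: rs_sizes[name] + shares[name] for name in rs_sizes}

# Mid-rollout state from the log: old RS at 8 pods, new RS at 5 (13 total).
# Scaling the Deployment 10 -> 30 with maxSurge=3 allows 30 + 3 = 33 pods,
# matching the verified .spec.replicas of 20 (old) and 13 (new).
print(proportional_scale({"old": 8, "new": 5}, 30 + 3))  # {'old': 20, 'new': 13}
```

Under these numbers the floored shares are 12 and 7 with one leftover pod, which this sketch assigns to the new replicaset, reproducing the 20/13 split the test asserts.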
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-05-25 10:11:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0073aac88 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-05-25 10:11:12 +0000 UTC,LastTransitionTime:2021-05-25 10:11:12 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2021-05-25 10:11:12 +0000 UTC,LastTransitionTime:2021-05-25 10:10:58 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 25 10:11:14.270: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-8411 30032914-7695-46a7-98e8-8d7e858d5460 496411 3 2021-05-25 10:11:10 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment bb301dd3-7c13-4f99-98fc-790195673124 0xc0073ab0a7 0xc0073ab0a8}] [] [{kube-controller-manager Update apps/v1 2021-05-25 10:11:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb301dd3-7c13-4f99-98fc-790195673124\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0073ab138 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 25 10:11:14.270: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 25 10:11:14.270: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb deployment-8411 dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5 496403 3 2021-05-25 10:10:58 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment bb301dd3-7c13-4f99-98fc-790195673124 0xc0073ab1a7 0xc0073ab1a8}] [] [{kube-controller-manager Update apps/v1 2021-05-25 10:11:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb301dd3-7c13-4f99-98fc-790195673124\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 
847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0073ab228 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 25 10:11:14.279: INFO: Pod "webserver-deployment-795d758f88-2k4rj" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-2k4rj webserver-deployment-795d758f88- deployment-8411 0b4ca45a-fe5d-4f7f-8782-531b88fe9e12 496373 0 2021-05-25 10:11:12 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 30032914-7695-46a7-98e8-8d7e858d5460 0xc0073e2577 0xc0073e2578}] [] [{kube-controller-manager Update v1 2021-05-25 10:11:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"30032914-7695-46a7-98e8-8d7e858d5460\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-t2ckz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,P
orts:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t2ckz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstra
int{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.279: INFO: Pod "webserver-deployment-795d758f88-4vcvf" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-4vcvf webserver-deployment-795d758f88- deployment-8411 4e41bff6-8d16-4d38-a2d5-bcbdc83bf823 496513 0 2021-05-25 10:11:12 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.53" ], "mac": "be:22:9e:8a:7e:a5", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.53" ], "mac": "be:22:9e:8a:7e:a5", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 30032914-7695-46a7-98e8-8d7e858d5460 0xc0073e26f0 0xc0073e26f1}] [] [{kube-controller-manager Update v1 2021-05-25 10:11:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"30032914-7695-46a7-98e8-8d7e858d5460\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update 
v1 2021-05-25 10:11:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xzlqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xzlqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,
Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:12 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.280: INFO: Pod "webserver-deployment-795d758f88-55fs9" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-55fs9 webserver-deployment-795d758f88- deployment-8411 4a69bf8d-b8e9-4b7e-96e4-a58d5856e9e6 496396 0 2021-05-25 10:11:12 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 30032914-7695-46a7-98e8-8d7e858d5460 0xc0073e2910 0xc0073e2911}] [] [{kube-controller-manager Update v1 2021-05-25 10:11:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"30032914-7695-46a7-98e8-8d7e858d5460\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bg44l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountT
oken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bg44l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},
WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.280: INFO: Pod "webserver-deployment-795d758f88-7m989" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-7m989 webserver-deployment-795d758f88- deployment-8411 48c61109-b4a0-4b0b-b4db-ccd4b7eb771e 496384 0 2021-05-25 10:11:12 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 30032914-7695-46a7-98e8-8d7e858d5460 0xc0073e2ab0 0xc0073e2ab1}] [] [{kube-controller-manager Update v1 2021-05-25 10:11:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"30032914-7695-46a7-98e8-8d7e858d5460\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6cq6k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,P
orts:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6cq6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstr
aint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.280: INFO: Pod "webserver-deployment-795d758f88-8drh6" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-8drh6 webserver-deployment-795d758f88- deployment-8411 1633921e-9302-4a05-bdce-bcb721751adc 496331 0 2021-05-25 10:11:10 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.49" ], "mac": "e2:a5:44:f1:d3:73", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.49" ], "mac": "e2:a5:44:f1:d3:73", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 30032914-7695-46a7-98e8-8d7e858d5460 0xc0073e2c70 0xc0073e2c71}] [] [{kube-controller-manager Update v1 2021-05-25 10:11:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"30032914-7695-46a7-98e8-8d7e858d5460\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update 
v1 2021-05-25 10:11:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xkfz8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xkfz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,
Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:10 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.281: INFO: Pod "webserver-deployment-795d758f88-8pllm" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-8pllm webserver-deployment-795d758f88- deployment-8411 f2f632d2-2079-47ef-9025-aec131c9afae 496323 0 2021-05-25 10:11:10 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.48" ], "mac": "d6:a1:44:d0:1d:56", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.48" ], "mac": "d6:a1:44:d0:1d:56", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 30032914-7695-46a7-98e8-8d7e858d5460 0xc0073e2e50 0xc0073e2e51}] [] [{kube-controller-manager Update v1 2021-05-25 10:11:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"30032914-7695-46a7-98e8-8d7e858d5460\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-25 10:11:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-d278x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d278x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUse
r:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:10 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.281: INFO: Pod "webserver-deployment-795d758f88-9wbkh" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-9wbkh webserver-deployment-795d758f88- deployment-8411 15ebaa35-608f-4ace-98ec-95ff9e9fe478 496494 0 2021-05-25 10:11:12 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.221" ], "mac": "ae:1c:7d:62:a0:e7", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.221" ], "mac": "ae:1c:7d:62:a0:e7", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 30032914-7695-46a7-98e8-8d7e858d5460 0xc0073e3040 0xc0073e3041}] [] [{kube-controller-manager Update v1 2021-05-25 10:11:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"30032914-7695-46a7-98e8-8d7e858d5460\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-25 10:11:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p9qfm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p9qfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUse
r:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:12 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.282: INFO: Pod "webserver-deployment-795d758f88-f84bd" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-f84bd webserver-deployment-795d758f88- deployment-8411 f8e38bca-3bea-45b9-ae3a-4632230710da 496375 0 2021-05-25 10:11:12 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 30032914-7695-46a7-98e8-8d7e858d5460 0xc0073e3220 0xc0073e3221}] [] [{kube-controller-manager Update v1 2021-05-25 10:11:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"30032914-7695-46a7-98e8-8d7e858d5460\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4842g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountT
oken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4842g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},W
indowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.282: INFO: Pod "webserver-deployment-795d758f88-jtkl7" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-jtkl7 webserver-deployment-795d758f88- deployment-8411 727fa511-6167-4ae4-be95-76e051d5c26d 496324 0 2021-05-25 10:11:10 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.220" ], "mac": "f2:53:4f:72:79:6c", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.220" ], "mac": "f2:53:4f:72:79:6c", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 
30032914-7695-46a7-98e8-8d7e858d5460 0xc0073e33e0 0xc0073e33e1}] [] [{kube-controller-manager Update v1 2021-05-25 10:11:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"30032914-7695-46a7-98e8-8d7e858d5460\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-05-25 10:11:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}} {multus Update v1 2021-05-25 10:11:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rb2m7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rb2m7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUse
r:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2021-05-25 10:11:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.283: INFO: Pod "webserver-deployment-795d758f88-lndqm" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-lndqm webserver-deployment-795d758f88- deployment-8411 bbabbd4f-5bca-4999-9473-a3f8bc3865bb 496509 0 2021-05-25 10:11:12 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.223" ], "mac": "d2:3b:82:f4:81:3f", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.223" ], "mac": "d2:3b:82:f4:81:3f", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 30032914-7695-46a7-98e8-8d7e858d5460 0xc0073e35f0 0xc0073e35f1}] [] [{kube-controller-manager Update v1 2021-05-25 10:11:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"30032914-7695-46a7-98e8-8d7e858d5460\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-25 10:11:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5mxtj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,Se
rviceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5mxtj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessG
ates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.283: INFO: Pod "webserver-deployment-795d758f88-s6qb2" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-s6qb2 webserver-deployment-795d758f88- deployment-8411 30ff6af0-7dd6-4c22-9360-573108627aee 496502 0 2021-05-25 10:11:12 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.52" ], "mac": "f2:0b:4e:e9:b7:d5", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.52" ], "mac": "f2:0b:4e:e9:b7:d5", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 30032914-7695-46a7-98e8-8d7e858d5460 0xc0073e3780 0xc0073e3781}] [] [{kube-controller-manager Update v1 2021-05-25 10:11:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"30032914-7695-46a7-98e8-8d7e858d5460\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-25 10:11:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4mrbr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,Se
rviceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4mrbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGa
tes:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.284: INFO: Pod "webserver-deployment-795d758f88-t8lqw" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-t8lqw webserver-deployment-795d758f88- deployment-8411 c7ee8257-6eb1-439c-bc37-bb67af2c610f 496335 0 2021-05-25 10:11:10 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.219" ], "mac": "46:3d:f9:cb:fe:bd", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.219" ], "mac": "46:3d:f9:cb:fe:bd", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 30032914-7695-46a7-98e8-8d7e858d5460 0xc0073e3910 0xc0073e3911}] [] [{kube-controller-manager Update v1 2021-05-25 10:11:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"30032914-7695-46a7-98e8-8d7e858d5460\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-25 10:11:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-25 10:11:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.219\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xkn4s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardA
PI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xkn4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:ni
l,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.2.219,StartTime:2021-05-25 10:11:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image 
"docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.219,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.284: INFO: Pod "webserver-deployment-795d758f88-zqknx" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-zqknx webserver-deployment-795d758f88- deployment-8411 08243f12-80e9-479f-99ea-9f0e1a66d68c 496320 0 2021-05-25 10:11:10 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.47" ], "mac": "7a:43:2d:46:71:7e", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.47" ], "mac": "7a:43:2d:46:71:7e", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 30032914-7695-46a7-98e8-8d7e858d5460 0xc0073e3ba0 0xc0073e3ba1}] [] [{kube-controller-manager Update v1 2021-05-25 10:11:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"30032914-7695-46a7-98e8-8d7e858d5460\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-25 10:11:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4v7ln,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,Se
rviceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4v7ln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGa
tes:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.285: INFO: Pod "webserver-deployment-847dcfb7fb-2c29p" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-2c29p webserver-deployment-847dcfb7fb- deployment-8411 f04f793b-fb49-48ea-a48d-ddc248d31061 496051 0 2021-05-25 10:10:58 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.38" ], "mac": "ca:21:89:87:34:27", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.38" ], "mac": "ca:21:89:87:34:27", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5 0xc0073e3d30 0xc0073e3d31}] [] [{kube-controller-manager Update v1 2021-05-25 10:10:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-25 10:10:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-25 10:11:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.38\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-c8cnh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&Serv
iceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c8cnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sy
sctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:10:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:10:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.1.38,StartTime:2021-05-25 10:10:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-25 10:10:59 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://dce34dc2d4302d29940bd0509594d8e1fc271409df0c3eff0a88be67bf155c1b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.38,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.285: INFO: Pod "webserver-deployment-847dcfb7fb-4k6cs" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-4k6cs webserver-deployment-847dcfb7fb- deployment-8411 f28b9db8-3972-4fe5-af95-b30b5a4e785b 496361 0 2021-05-25 10:11:12 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5 0xc0073e3f20 0xc0073e3f21}] [] [{kube-controller-manager Update v1 2021-05-25 10:11:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-k85mk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Comm
and:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k85mk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstr
aints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.285: INFO: Pod "webserver-deployment-847dcfb7fb-58tf6" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-58tf6 webserver-deployment-847dcfb7fb- deployment-8411 dc145505-a610-4f62-a71c-f4d6e6faebff 496372 0 2021-05-25 10:11:12 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5 0xc007438080 0xc007438081}] [] [{kube-controller-manager Update v1 2021-05-25 10:11:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bbfcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Comm
and:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bbfcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstr
aints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.285: INFO: Pod "webserver-deployment-847dcfb7fb-67jgd" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-67jgd webserver-deployment-847dcfb7fb- deployment-8411 756069f9-91ec-4fb5-96d7-c5b3b854949f 496496 0 2021-05-25 10:11:12 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.222" ], "mac": "c2:9d:a3:93:db:cc", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.222" ], "mac": "c2:9d:a3:93:db:cc", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5 0xc0074381e0 0xc0074381e1}] [] [{kube-controller-manager Update v1 2021-05-25 10:11:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-25 10:11:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jfg2x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,Se
rviceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jfg2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,SharePro
cessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.286: INFO: Pod "webserver-deployment-847dcfb7fb-9mvp8" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-9mvp8 webserver-deployment-847dcfb7fb- deployment-8411 247e6bce-cff4-43e7-8772-3214d8b8fb89 496394 0 2021-05-25 10:11:12 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5 0xc0074383a0 0xc0074383a1}] [] [{kube-controller-manager Update v1 2021-05-25 10:11:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vfqbl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Comm
and:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vfqbl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstr
aints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.286: INFO: Pod "webserver-deployment-847dcfb7fb-dmctb" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-dmctb webserver-deployment-847dcfb7fb- deployment-8411 c9c8e092-aea9-4bd3-8d0a-b26d37f73541 496489 0 2021-05-25 10:11:12 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.50" ], "mac": "b6:95:ed:a1:db:fc", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.50" ], "mac": "b6:95:ed:a1:db:fc", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5 0xc007438510 0xc007438511}] [] [{kube-controller-manager Update v1 2021-05-25 10:11:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-25 10:11:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fk667,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,Se
rviceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fk667,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProc
essNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.286: INFO: Pod "webserver-deployment-847dcfb7fb-dtz6n" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-dtz6n webserver-deployment-847dcfb7fb- deployment-8411 0683fac7-1f85-4b90-a72a-265775b64241 496132 0 2021-05-25 10:10:58 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.40" ], "mac": "ae:e9:bf:a8:fe:f9", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.40" ], "mac": "ae:e9:bf:a8:fe:f9", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5 0xc0074386f0 0xc0074386f1}] [] [{kube-controller-manager Update v1 2021-05-25 10:10:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-25 10:10:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-25 10:11:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.40\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qrk4k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&Serv
iceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qrk4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sy
sctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:10:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:10:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.1.40,StartTime:2021-05-25 10:10:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-25 10:11:00 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://8119d87cba8d5ffd0581eb48813e4094d1bfb259b1da20ace58f7346207bad21,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.40,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.287: INFO: Pod "webserver-deployment-847dcfb7fb-g2jxr" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-g2jxr webserver-deployment-847dcfb7fb- deployment-8411 d326e6bb-3549-44c9-82a0-d7bdb34ef99e 496116 0 2021-05-25 10:10:58 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.214" ], "mac": "52:89:31:0f:01:a1", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.214" ], "mac": "52:89:31:0f:01:a1", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5 0xc007438940 0xc007438941}] [] [{kube-controller-manager Update v1 2021-05-25 10:10:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-25 10:10:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-25 10:11:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.214\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9gc9d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&Ser
viceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9gc9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,
Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:10:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:10:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.2.214,StartTime:2021-05-25 10:10:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-25 10:11:00 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://b1b95d30ac0fa4651c9753cc478cb51086b240550a0b0addf8284c19b410f28d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.214,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.287: INFO: Pod "webserver-deployment-847dcfb7fb-gr8zc" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-gr8zc webserver-deployment-847dcfb7fb- deployment-8411 b6b8dbdf-7ef8-49da-91a7-2fe4f09e6e69 496032 0 2021-05-25 10:10:58 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.37" ], "mac": "3a:90:90:f4:c1:e6", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.37" ], "mac": "3a:90:90:f4:c1:e6", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5 0xc007438ba0 0xc007438ba1}] [] [{kube-controller-manager Update v1 2021-05-25 10:10:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-25 10:10:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-25 10:11:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.37\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vczd8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&Serv
iceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vczd8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sy
sctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:10:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:10:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.1.37,StartTime:2021-05-25 10:10:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-25 10:10:59 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://8fc4952e5071ae2ee73aa3c6c36f919a753cea6bfefd31c451bd3eff004dbee7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.37,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.287: INFO: Pod "webserver-deployment-847dcfb7fb-jxt6j" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-jxt6j webserver-deployment-847dcfb7fb- deployment-8411 8bb15e39-b1cc-42f5-99bd-218800caa0a3 496012 0 2021-05-25 10:10:58 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.212" ], "mac": "5a:b9:86:1c:1c:50", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.212" ], "mac": "5a:b9:86:1c:1c:50", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5 0xc007438df0 0xc007438df1}] [] [{kube-controller-manager Update v1 2021-05-25 10:10:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-25 10:10:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-25 10:11:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.212\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vfcjq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&Ser
viceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vfcjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,
Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:10:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:10:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:10:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:10:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.2.212,StartTime:2021-05-25 10:10:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-25 10:10:59 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://b4be452b3735a1ab58d33d65e9fc80f46405ae1c670cb38d3a26e170e9c8a0a4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.212,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.288: INFO: Pod "webserver-deployment-847dcfb7fb-kdgg6" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-kdgg6 webserver-deployment-847dcfb7fb- deployment-8411 fae7e0de-0a8e-40ed-85f5-fdf037fde413 496381 0 2021-05-25 10:11:12 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5 0xc007439030 0xc007439031}] [] [{kube-controller-manager Update v1 2021-05-25 10:11:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rwg6w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Comm
and:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rwg6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstra
ints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.288: INFO: Pod "webserver-deployment-847dcfb7fb-kdt4w" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-kdt4w webserver-deployment-847dcfb7fb- deployment-8411 d4a7482d-a85e-4431-8a08-a746bb249abd 496369 0 2021-05-25 10:11:12 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5 0xc007439190 0xc007439191}] [] [{kube-controller-manager Update v1 2021-05-25 10:11:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9nthj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Comm
and:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9nthj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstra
ints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.289: INFO: Pod "webserver-deployment-847dcfb7fb-mlqbv" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-mlqbv webserver-deployment-847dcfb7fb- deployment-8411 8304c727-647f-4b7a-8fa0-95d16d7f4a6e 496493 0 2021-05-25 10:11:12 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.51" ], "mac": "c6:ea:11:3a:b8:0b", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.51" ], "mac": "c6:ea:11:3a:b8:0b", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5 0xc0074392f0 0xc0074392f1}] [] [{kube-controller-manager Update v1 2021-05-25 10:11:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-25 10:11:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6q9gx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,Se
rviceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6q9gx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProc
essNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.289: INFO: Pod "webserver-deployment-847dcfb7fb-nknbp" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-nknbp webserver-deployment-847dcfb7fb- deployment-8411 bca4bf64-0aa2-4c1b-bbca-93d808a85786 496046 0 2021-05-25 10:10:58 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.39" ], "mac": "f6:cd:6a:e5:b3:bb", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.39" ], "mac": "f6:cd:6a:e5:b3:bb", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5 0xc007439470 0xc007439471}] [] [{kube-controller-manager Update v1 2021-05-25 10:10:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-25 10:10:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-25 10:11:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.39\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tvzmz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&Serv
iceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tvzmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sy
sctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:10:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:10:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.1.39,StartTime:2021-05-25 10:10:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-25 10:11:00 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://98441025edcb6c1fe9f864fcad345845bcb2ef924696268a210f67aa185fe108,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.39,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.293: INFO: Pod "webserver-deployment-847dcfb7fb-tjg22" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-tjg22 webserver-deployment-847dcfb7fb- deployment-8411 91ad33b2-62ed-4d14-872f-d1bc586ddce7 496081 0 2021-05-25 10:10:58 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.215" ], "mac": "a6:f1:6f:b3:0f:91", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.215" ], "mac": "a6:f1:6f:b3:0f:91", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5 0xc007439660 0xc007439661}] [] [{kube-controller-manager Update v1 2021-05-25 10:10:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-25 10:10:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-25 10:11:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.215\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vbtbk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&Ser
viceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vbtbk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,
Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:10:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:10:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.2.215,StartTime:2021-05-25 10:10:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-25 10:11:00 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://8d7f1f4c0c96d9b5b2d40710ad7956fe4beb054fc38defb2ee11524db21d587e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.215,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.294: INFO: Pod "webserver-deployment-847dcfb7fb-v6fl6" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-v6fl6 webserver-deployment-847dcfb7fb- deployment-8411 089be3f8-94ee-4a9f-9346-dbfddc18d991 496392 0 2021-05-25 10:11:12 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5 0xc007439850 0xc007439851}] [] [{kube-controller-manager Update v1 2021-05-25 10:11:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-47pc5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Comm
and:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-47pc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstra
ints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.294: INFO: Pod "webserver-deployment-847dcfb7fb-vf9nn" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-vf9nn webserver-deployment-847dcfb7fb- deployment-8411 e716517a-2a0b-49ea-b832-1448093a44b7 496362 0 2021-05-25 10:11:12 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5 0xc0074399b0 0xc0074399b1}] [] [{kube-controller-manager Update v1 2021-05-25 10:11:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mngw9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Comm
and:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mngw9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstr
aints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.295: INFO: Pod "webserver-deployment-847dcfb7fb-z6tpb" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-z6tpb webserver-deployment-847dcfb7fb- deployment-8411 21153ae6-26c1-4fea-b961-f52ca0dafcb3 496514 0 2021-05-25 10:11:12 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.224" ], "mac": "d2:08:0b:f4:9c:a0", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.224" ], "mac": "d2:08:0b:f4:9c:a0", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5 0xc007439b10 0xc007439b11}] [] [{kube-controller-manager Update v1 2021-05-25 10:11:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-05-25 10:11:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}} {multus Update v1 2021-05-25 10:11:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jf766,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jf766,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil
,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2021-05-25 10:11:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.295: INFO: Pod "webserver-deployment-847dcfb7fb-zrdnb" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-zrdnb webserver-deployment-847dcfb7fb- deployment-8411 36a4268e-12a3-40cb-8a10-58435aa9ae42 496393 0 2021-05-25 10:11:12 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5 0xc007439ce0 0xc007439ce1}] [] [{kube-controller-manager Update v1 2021-05-25 10:11:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-njcpp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Comm
and:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-njcpp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstra
ints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:11:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:11:14.295: INFO: Pod "webserver-deployment-847dcfb7fb-zz6jq" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-zz6jq webserver-deployment-847dcfb7fb- deployment-8411 7df85eb8-97ec-4dba-8523-e94cecee5f91 496037 0 2021-05-25 10:10:58 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.213" ], "mac": "56:f2:b9:49:0e:ee", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.213" ], "mac": "56:f2:b9:49:0e:ee", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5 0xc007439e40 0xc007439e41}] [] [{kube-controller-manager Update v1 2021-05-25 10:10:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dff597bb-b8d0-4a4c-8d2e-bcbf6a2338b5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-25 10:10:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-25 10:11:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.213\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qtrmd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&Ser
viceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qtrmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,
Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:10:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:10:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:10:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:10:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.2.213,StartTime:2021-05-25 10:10:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-25 10:10:59 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://7bd5d37eac0ac1514c817336e523b194470dd539aae683dc1a327b54a4e4cf7a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.213,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:11:14.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8411" for this suite. • [SLOW TEST:16.166 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:11:09.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: 
Expecting to observe a delete notification for the watched object May 25 10:11:09.206: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1150 e4c04615-fd2e-48fe-86f5-de0d9ea22b7e 496221 0 2021-05-25 10:11:09 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-05-25 10:11:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 25 10:11:09.207: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1150 e4c04615-fd2e-48fe-86f5-de0d9ea22b7e 496223 0 2021-05-25 10:11:09 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-05-25 10:11:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 25 10:11:09.207: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1150 e4c04615-fd2e-48fe-86f5-de0d9ea22b7e 496224 0 2021-05-25 10:11:09 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-05-25 10:11:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 25 10:11:19.988: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1150 
e4c04615-fd2e-48fe-86f5-de0d9ea22b7e 496602 0 2021-05-25 10:11:09 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-05-25 10:11:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 25 10:11:19.988: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1150 e4c04615-fd2e-48fe-86f5-de0d9ea22b7e 496606 0 2021-05-25 10:11:09 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-05-25 10:11:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} May 25 10:11:19.988: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1150 e4c04615-fd2e-48fe-86f5-de0d9ea22b7e 496608 0 2021-05-25 10:11:09 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-05-25 10:11:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:11:19.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1150" for this suite. 
• [SLOW TEST:10.840 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":7,"skipped":120,"failed":0} SSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":254,"failed":0} [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:55.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 25 10:10:55.878: INFO: >>> kubeConfig: /root/.kube/config May 25 10:11:03.792: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:11:22.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "crd-publish-openapi-3722" for this suite. • [SLOW TEST:26.891 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":16,"skipped":254,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:11:22.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pods Set QOS Class /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:11:22.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8309" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":17,"skipped":256,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:11:20.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-c5f12553-33a3-48e3-9130-3c9d99bc0e14 STEP: Creating a pod to test consume configMaps May 25 10:11:20.044: INFO: Waiting up to 5m0s for pod "pod-configmaps-b15722e2-531d-4775-aeb0-1e9fe5e584ab" in namespace "configmap-3661" to be "Succeeded or Failed" May 25 10:11:20.045: INFO: Pod "pod-configmaps-b15722e2-531d-4775-aeb0-1e9fe5e584ab": Phase="Pending", Reason="", readiness=false. Elapsed: 1.730659ms May 25 10:11:22.049: INFO: Pod "pod-configmaps-b15722e2-531d-4775-aeb0-1e9fe5e584ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005342495s May 25 10:11:24.052: INFO: Pod "pod-configmaps-b15722e2-531d-4775-aeb0-1e9fe5e584ab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00847229s May 25 10:11:26.056: INFO: Pod "pod-configmaps-b15722e2-531d-4775-aeb0-1e9fe5e584ab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012373119s May 25 10:11:28.184: INFO: Pod "pod-configmaps-b15722e2-531d-4775-aeb0-1e9fe5e584ab": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.139931388s STEP: Saw pod success May 25 10:11:28.184: INFO: Pod "pod-configmaps-b15722e2-531d-4775-aeb0-1e9fe5e584ab" satisfied condition "Succeeded or Failed" May 25 10:11:28.483: INFO: Trying to get logs from node v1.21-worker pod pod-configmaps-b15722e2-531d-4775-aeb0-1e9fe5e584ab container configmap-volume-test: STEP: delete the pod May 25 10:11:29.183: INFO: Waiting for pod pod-configmaps-b15722e2-531d-4775-aeb0-1e9fe5e584ab to disappear May 25 10:11:29.187: INFO: Pod pod-configmaps-b15722e2-531d-4775-aeb0-1e9fe5e584ab no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:11:29.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3661" for this suite. • [SLOW TEST:9.468 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":129,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:11:03.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 
STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-downwardapi-wmhw STEP: Creating a pod to test atomic-volume-subpath May 25 10:11:03.612: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-wmhw" in namespace "subpath-9205" to be "Succeeded or Failed" May 25 10:11:03.615: INFO: Pod "pod-subpath-test-downwardapi-wmhw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.885497ms May 25 10:11:05.619: INFO: Pod "pod-subpath-test-downwardapi-wmhw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006766117s May 25 10:11:07.622: INFO: Pod "pod-subpath-test-downwardapi-wmhw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010393727s May 25 10:11:09.627: INFO: Pod "pod-subpath-test-downwardapi-wmhw": Phase="Running", Reason="", readiness=true. Elapsed: 6.014998266s May 25 10:11:11.632: INFO: Pod "pod-subpath-test-downwardapi-wmhw": Phase="Running", Reason="", readiness=true. Elapsed: 8.020154494s May 25 10:11:13.637: INFO: Pod "pod-subpath-test-downwardapi-wmhw": Phase="Running", Reason="", readiness=true. Elapsed: 10.024886194s May 25 10:11:15.643: INFO: Pod "pod-subpath-test-downwardapi-wmhw": Phase="Running", Reason="", readiness=true. Elapsed: 12.031050135s May 25 10:11:17.683: INFO: Pod "pod-subpath-test-downwardapi-wmhw": Phase="Running", Reason="", readiness=true. Elapsed: 14.071171105s May 25 10:11:19.883: INFO: Pod "pod-subpath-test-downwardapi-wmhw": Phase="Running", Reason="", readiness=true. Elapsed: 16.271360025s May 25 10:11:21.887: INFO: Pod "pod-subpath-test-downwardapi-wmhw": Phase="Running", Reason="", readiness=true. Elapsed: 18.274768097s May 25 10:11:23.891: INFO: Pod "pod-subpath-test-downwardapi-wmhw": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.278862775s May 25 10:11:25.896: INFO: Pod "pod-subpath-test-downwardapi-wmhw": Phase="Running", Reason="", readiness=true. Elapsed: 22.283808441s May 25 10:11:28.184: INFO: Pod "pod-subpath-test-downwardapi-wmhw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.571719793s STEP: Saw pod success May 25 10:11:28.184: INFO: Pod "pod-subpath-test-downwardapi-wmhw" satisfied condition "Succeeded or Failed" May 25 10:11:28.483: INFO: Trying to get logs from node v1.21-worker2 pod pod-subpath-test-downwardapi-wmhw container test-container-subpath-downwardapi-wmhw: STEP: delete the pod May 25 10:11:29.183: INFO: Waiting for pod pod-subpath-test-downwardapi-wmhw to disappear May 25 10:11:29.188: INFO: Pod pod-subpath-test-downwardapi-wmhw no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-wmhw May 25 10:11:29.188: INFO: Deleting pod "pod-subpath-test-downwardapi-wmhw" in namespace "subpath-9205" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:11:29.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9205" for this suite. 
• [SLOW TEST:26.512 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":16,"skipped":277,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:11:22.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslicemirroring STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39 [It] should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: mirroring a new custom Endpoint May 25 10:11:22.889: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 STEP: mirroring an update to a custom Endpoint May 25 10:11:24.903: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3 STEP: mirroring deletion of a custom Endpoint May 25 10:11:27.678: INFO: Waiting for 0 EndpointSlices to exist, got 1 [AfterEach] [sig-network] EndpointSliceMirroring 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:11:29.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslicemirroring-9610" for this suite. • [SLOW TEST:7.639 seconds] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":18,"skipped":287,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:11:09.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 10:11:10.082: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 10:11:12.092: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534270, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534270, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534270, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534270, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 10:11:14.184: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534270, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534270, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534270, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534270, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 10:11:16.097: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534270, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534270, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534270, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534270, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 10:11:18.184: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534270, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534270, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534270, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534270, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 10:11:20.289: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534270, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534270, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534270, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534270, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 10:11:22.096: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534270, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534270, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534270, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534270, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 10:11:24.097: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534270, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534270, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534270, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534270, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 25 10:11:27.105: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 25 10:11:27.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:11:32.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7821" for this suite. STEP: Destroying namespace "webhook-7821-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:22.997 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":13,"skipped":238,"failed":0} SSSSSS ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":14,"skipped":187,"failed":0} [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:11:14.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod May 25 10:11:14.346: INFO: The status of Pod 
labelsupdate104b06d3-6535-44c8-889c-ce87f0c672a9 is Pending, waiting for it to be Running (with Ready = true) May 25 10:11:16.351: INFO: The status of Pod labelsupdate104b06d3-6535-44c8-889c-ce87f0c672a9 is Pending, waiting for it to be Running (with Ready = true) May 25 10:11:18.782: INFO: The status of Pod labelsupdate104b06d3-6535-44c8-889c-ce87f0c672a9 is Pending, waiting for it to be Running (with Ready = true) May 25 10:11:20.378: INFO: The status of Pod labelsupdate104b06d3-6535-44c8-889c-ce87f0c672a9 is Pending, waiting for it to be Running (with Ready = true) May 25 10:11:22.349: INFO: The status of Pod labelsupdate104b06d3-6535-44c8-889c-ce87f0c672a9 is Pending, waiting for it to be Running (with Ready = true) May 25 10:11:24.352: INFO: The status of Pod labelsupdate104b06d3-6535-44c8-889c-ce87f0c672a9 is Pending, waiting for it to be Running (with Ready = true) May 25 10:11:26.350: INFO: The status of Pod labelsupdate104b06d3-6535-44c8-889c-ce87f0c672a9 is Pending, waiting for it to be Running (with Ready = true) May 25 10:11:28.483: INFO: The status of Pod labelsupdate104b06d3-6535-44c8-889c-ce87f0c672a9 is Running (Ready = true) May 25 10:11:29.578: INFO: Successfully updated pod "labelsupdate104b06d3-6535-44c8-889c-ce87f0c672a9" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:11:32.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8282" for this suite. 
• [SLOW TEST:17.893 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":187,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:11:32.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:11:34.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-5874" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":14,"skipped":244,"failed":0} SS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:11:29.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption is created May 25 10:11:31.179: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) May 25 10:11:33.183: INFO: The status of Pod pod-adoption is Running (Ready = true) STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:11:34.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9228" for this suite. 
•S ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":9,"skipped":148,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:11:32.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:11:40.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-8197" for this suite. 
• [SLOW TEST:8.131 seconds]
[sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":16,"skipped":193,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] IngressClass API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:11:40.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingressclass
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] IngressClass API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:149
[It] should support creating IngressClass API operations [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
May 25 10:11:40.436: INFO: starting watch
STEP: patching
STEP: updating
May 25 10:11:40.447: INFO: waiting for watch events with expected annotations
May 25 10:11:40.447: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] IngressClass API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:11:40.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-3721" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":17,"skipped":207,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:11:30.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should create and stop a working application [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating all guestbook components
May 25 10:11:31.387: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend
May 25 10:11:31.387: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3799 create -f -'
May 25 10:11:31.902: INFO: stderr: ""
May 25 10:11:31.902: INFO: stdout: "service/agnhost-replica created\n"
May 25 10:11:31.902: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend
May 25 10:11:31.902: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3799 create -f -'
May 25 10:11:32.195: INFO: stderr: ""
May 25 10:11:32.195: INFO: stdout: "service/agnhost-primary created\n"
May 25 10:11:32.196: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
May 25 10:11:32.196: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3799 create -f -'
May 25 10:11:32.482: INFO: stderr: ""
May 25 10:11:32.482: INFO: stdout: "service/frontend created\n"
May 25 10:11:32.483: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
May 25 10:11:32.483: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3799 create -f -'
May 25 10:11:32.770: INFO: stderr: ""
May 25 10:11:32.770: INFO: stdout: "deployment.apps/frontend created\n"
May 25 10:11:32.770: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 25 10:11:32.770: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3799 create -f -'
May 25 10:11:33.060: INFO: stderr: ""
May 25 10:11:33.060: INFO: stdout: "deployment.apps/agnhost-primary created\n"
May 25 10:11:33.060: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 25 10:11:33.061: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3799 create -f -'
May 25 10:11:33.427: INFO: stderr: ""
May 25 10:11:33.427: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
May 25 10:11:33.427: INFO: Waiting for all frontend pods to be Running.
May 25 10:11:43.480: INFO: Waiting for frontend to serve content.
May 25 10:11:43.490: INFO: Trying to add a new entry to the guestbook.
May 25 10:11:43.500: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
May 25 10:11:43.511: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3799 delete --grace-period=0 --force -f -'
May 25 10:11:43.639: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" May 25 10:11:43.639: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources May 25 10:11:43.639: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3799 delete --grace-period=0 --force -f -' May 25 10:11:43.755: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 25 10:11:43.755: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources May 25 10:11:43.755: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3799 delete --grace-period=0 --force -f -' May 25 10:11:43.918: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 25 10:11:43.918: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 25 10:11:43.918: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3799 delete --grace-period=0 --force -f -' May 25 10:11:44.039: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 25 10:11:44.039: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 25 10:11:44.039: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3799 delete --grace-period=0 --force -f -' May 25 10:11:44.154: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 25 10:11:44.154: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources May 25 10:11:44.155: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3799 delete --grace-period=0 --force -f -' May 25 10:11:44.275: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 25 10:11:44.275: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:11:44.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3799" for this suite. 
• [SLOW TEST:13.780 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:336 should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":19,"skipped":298,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:11:34.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-105ed98b-1bb7-43a0-ae7a-78b932e39b64 STEP: Creating a pod to test consume secrets May 25 10:11:34.262: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0fef8f24-b3ed-4921-a465-cf78efe985df" in namespace "projected-3707" to be "Succeeded or Failed" May 25 10:11:34.264: INFO: Pod "pod-projected-secrets-0fef8f24-b3ed-4921-a465-cf78efe985df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.387751ms May 25 10:11:36.268: INFO: Pod "pod-projected-secrets-0fef8f24-b3ed-4921-a465-cf78efe985df": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006795879s May 25 10:11:38.272: INFO: Pod "pod-projected-secrets-0fef8f24-b3ed-4921-a465-cf78efe985df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010803301s May 25 10:11:40.277: INFO: Pod "pod-projected-secrets-0fef8f24-b3ed-4921-a465-cf78efe985df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015558552s May 25 10:11:42.281: INFO: Pod "pod-projected-secrets-0fef8f24-b3ed-4921-a465-cf78efe985df": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019352378s May 25 10:11:44.285: INFO: Pod "pod-projected-secrets-0fef8f24-b3ed-4921-a465-cf78efe985df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.022844291s STEP: Saw pod success May 25 10:11:44.285: INFO: Pod "pod-projected-secrets-0fef8f24-b3ed-4921-a465-cf78efe985df" satisfied condition "Succeeded or Failed" May 25 10:11:44.288: INFO: Trying to get logs from node v1.21-worker pod pod-projected-secrets-0fef8f24-b3ed-4921-a465-cf78efe985df container projected-secret-volume-test: STEP: delete the pod May 25 10:11:44.302: INFO: Waiting for pod pod-projected-secrets-0fef8f24-b3ed-4921-a465-cf78efe985df to disappear May 25 10:11:44.305: INFO: Pod pod-projected-secrets-0fef8f24-b3ed-4921-a465-cf78efe985df no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:11:44.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3707" for this suite. 
• [SLOW TEST:10.092 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":157,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:11:34.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-ef875c28-f865-433f-81f9-b35cbf68bedd STEP: Creating a pod to test consume secrets May 25 10:11:34.301: INFO: Waiting up to 5m0s for pod "pod-secrets-4ce78464-cabe-4060-a0f6-c00f21236523" in namespace "secrets-1515" to be "Succeeded or Failed" May 25 10:11:34.304: INFO: Pod "pod-secrets-4ce78464-cabe-4060-a0f6-c00f21236523": Phase="Pending", Reason="", readiness=false. Elapsed: 2.931538ms May 25 10:11:36.308: INFO: Pod "pod-secrets-4ce78464-cabe-4060-a0f6-c00f21236523": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007112711s May 25 10:11:38.313: INFO: Pod "pod-secrets-4ce78464-cabe-4060-a0f6-c00f21236523": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.012248491s May 25 10:11:40.317: INFO: Pod "pod-secrets-4ce78464-cabe-4060-a0f6-c00f21236523": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016897303s May 25 10:11:42.322: INFO: Pod "pod-secrets-4ce78464-cabe-4060-a0f6-c00f21236523": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021213958s May 25 10:11:44.325: INFO: Pod "pod-secrets-4ce78464-cabe-4060-a0f6-c00f21236523": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.02437844s STEP: Saw pod success May 25 10:11:44.325: INFO: Pod "pod-secrets-4ce78464-cabe-4060-a0f6-c00f21236523" satisfied condition "Succeeded or Failed" May 25 10:11:44.328: INFO: Trying to get logs from node v1.21-worker pod pod-secrets-4ce78464-cabe-4060-a0f6-c00f21236523 container secret-volume-test: STEP: delete the pod May 25 10:11:44.340: INFO: Waiting for pod pod-secrets-4ce78464-cabe-4060-a0f6-c00f21236523 to disappear May 25 10:11:44.343: INFO: Pod pod-secrets-4ce78464-cabe-4060-a0f6-c00f21236523 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:11:44.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1515" for this suite. 
• [SLOW TEST:10.090 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":274,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:11:30.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
May 25 10:11:31.384: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 25 10:11:33.389: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook May 25 10:11:33.400: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) May 25 10:11:35.405: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) May 25 10:11:37.404: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook May 25 10:11:37.419: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 25 10:11:37.422: INFO: Pod pod-with-poststart-http-hook still exists May 25 10:11:39.423: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 25 10:11:39.427: INFO: Pod pod-with-poststart-http-hook still exists May 25 10:11:41.422: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 25 10:11:41.426: INFO: Pod pod-with-poststart-http-hook still exists May 25 10:11:43.423: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 25 10:11:43.427: INFO: Pod pod-with-poststart-http-hook still exists May 25 10:11:45.423: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 25 10:11:45.427: INFO: Pod pod-with-poststart-http-hook still exists May 25 10:11:47.422: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 25 10:11:47.426: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:11:47.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2915" for this suite. • [SLOW TEST:17.313 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":298,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:11:44.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:11:48.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2622" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":20,"skipped":341,"failed":0} SSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":17,"skipped":279,"failed":0} [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:59.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-5027 [It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-5027 STEP: Creating statefulset with conflicting port in namespace statefulset-5027 STEP: Waiting until pod test-pod will start running in namespace statefulset-5027 STEP: Waiting until stateful pod ss-0 will be recreated and 
deleted at least once in namespace statefulset-5027 May 25 10:11:07.801: INFO: Observed stateful pod in namespace: statefulset-5027, name: ss-0, uid: 6e6d20b0-0956-4d09-8766-5711d0043f33, status phase: Pending. Waiting for statefulset controller to delete. May 25 10:11:09.620: INFO: Observed stateful pod in namespace: statefulset-5027, name: ss-0, uid: 6e6d20b0-0956-4d09-8766-5711d0043f33, status phase: Failed. Waiting for statefulset controller to delete. May 25 10:11:09.628: INFO: Observed stateful pod in namespace: statefulset-5027, name: ss-0, uid: 6e6d20b0-0956-4d09-8766-5711d0043f33, status phase: Failed. Waiting for statefulset controller to delete. May 25 10:11:09.631: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5027 STEP: Removing pod with conflicting port in namespace statefulset-5027 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-5027 and will be in running state [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 May 25 10:11:30.187: INFO: Deleting all statefulset in ns statefulset-5027 May 25 10:11:30.483: INFO: Scaling statefulset ss to 0 May 25 10:11:50.689: INFO: Waiting for statefulset status.replicas updated to 0 May 25 10:11:50.692: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:11:50.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5027" for this suite. 
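The recreate-evicted scenario above hinges on a hostPort collision: a bare pod (test-pod) and the StatefulSet pod request the same host port on one node, so ss-0 repeatedly fails until the bare pod is removed and the controller recreates ss-0 successfully. A sketch of the StatefulSet side, with the node name, image, and port chosen purely for illustration:

```yaml
# Illustrative sketch -- nodeName, image, and hostPort are assumptions.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
  namespace: statefulset-5027
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      nodeName: v1.21-worker              # hypothetical; pinned to the node hosting the conflicting pod
      containers:
      - name: webserver
        image: k8s.gcr.io/nginx-slim:0.8  # assumed image
        ports:
        - containerPort: 80
          hostPort: 21017                 # hypothetical port shared with the pre-created test-pod
```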
• [SLOW TEST:50.971 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":18,"skipped":279,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:11:44.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC May 25 10:11:44.417: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-4587 create -f -' May 25 10:11:44.687: INFO: stderr: "" May 25 10:11:44.687: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. 
May 25 10:11:45.694: INFO: Selector matched 1 pods for map[app:agnhost] May 25 10:11:45.694: INFO: Found 0 / 1 May 25 10:11:46.691: INFO: Selector matched 1 pods for map[app:agnhost] May 25 10:11:46.691: INFO: Found 0 / 1 May 25 10:11:47.691: INFO: Selector matched 1 pods for map[app:agnhost] May 25 10:11:47.691: INFO: Found 0 / 1 May 25 10:11:48.692: INFO: Selector matched 1 pods for map[app:agnhost] May 25 10:11:48.692: INFO: Found 0 / 1 May 25 10:11:49.691: INFO: Selector matched 1 pods for map[app:agnhost] May 25 10:11:49.691: INFO: Found 0 / 1 May 25 10:11:50.691: INFO: Selector matched 1 pods for map[app:agnhost] May 25 10:11:50.691: INFO: Found 1 / 1 May 25 10:11:50.691: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 25 10:11:50.694: INFO: Selector matched 1 pods for map[app:agnhost] May 25 10:11:50.694: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 25 10:11:50.694: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-4587 patch pod agnhost-primary-8w7hg -p {"metadata":{"annotations":{"x":"y"}}}' May 25 10:11:50.823: INFO: stderr: "" May 25 10:11:50.823: INFO: stdout: "pod/agnhost-primary-8w7hg patched\n" STEP: checking annotations May 25 10:11:50.826: INFO: Selector matched 1 pods for map[app:agnhost] May 25 10:11:50.826: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:11:50.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4587" for this suite. 
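The -p argument to kubectl patch in the run above is a strategic merge patch; expressed as a manifest fragment, it simply merges one annotation into the live pod's metadata:

```yaml
# Equivalent manifest fragment for the patch body {"metadata":{"annotations":{"x":"y"}}}
metadata:
  annotations:
    x: "y"
```

The "checking annotations" step then re-lists the pods behind the selector and confirms the annotation is present on each.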
• [SLOW TEST:6.445 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460 should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":16,"skipped":294,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:11:40.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 10:11:41.071: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 10:11:43.084: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534301, loc:(*time.Location)(0x9dc0820)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534301, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534301, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534301, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 10:11:45.089: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534301, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534301, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534301, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534301, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 10:11:47.089: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534301, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63757534301, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534301, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534301, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 10:11:49.089: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534301, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534301, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534301, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534301, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 10:11:52.096: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the 
collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:11:53.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1108" for this suite. STEP: Destroying namespace "webhook-1108-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.968 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":18,"skipped":230,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:11:50.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: 
wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 25 10:11:55.191: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:11:55.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1943" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":299,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:11:44.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:11:55.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9857" for this suite. • [SLOW TEST:11.173 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:11:54.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-0b98170a-2a25-4975-9ab4-790c4b7e4edb STEP: Creating a pod to test consume secrets May 25 10:11:54.548: INFO: Waiting up to 5m0s for pod "pod-secrets-aebd6310-3ef6-4001-96df-87ba241bac33" in namespace "secrets-4747" to be 
"Succeeded or Failed" May 25 10:11:54.550: INFO: Pod "pod-secrets-aebd6310-3ef6-4001-96df-87ba241bac33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009967ms May 25 10:11:56.556: INFO: Pod "pod-secrets-aebd6310-3ef6-4001-96df-87ba241bac33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007316703s May 25 10:11:58.560: INFO: Pod "pod-secrets-aebd6310-3ef6-4001-96df-87ba241bac33": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011465364s May 25 10:12:00.565: INFO: Pod "pod-secrets-aebd6310-3ef6-4001-96df-87ba241bac33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016076649s STEP: Saw pod success May 25 10:12:00.565: INFO: Pod "pod-secrets-aebd6310-3ef6-4001-96df-87ba241bac33" satisfied condition "Succeeded or Failed" May 25 10:12:00.568: INFO: Trying to get logs from node v1.21-worker pod pod-secrets-aebd6310-3ef6-4001-96df-87ba241bac33 container secret-volume-test: STEP: delete the pod May 25 10:12:00.582: INFO: Waiting for pod pod-secrets-aebd6310-3ef6-4001-96df-87ba241bac33 to disappear May 25 10:12:00.585: INFO: Pod pod-secrets-aebd6310-3ef6-4001-96df-87ba241bac33 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:00.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4747" for this suite. 
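The secret-volume pod above follows a standard shape: a secret mounted as a volume with defaultMode set, plus a test container that reads back the file's permission bits and exits. A sketch under assumed values (the mode, image, and file paths are not taken from this log; the secret name is):

```yaml
# Illustrative sketch -- mode, image, and paths are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32   # assumed image
    args: ["mounttest", "--file_mode=/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-0b98170a-2a25-4975-9ab4-790c4b7e4edb
      defaultMode: 0400                              # assumed mode under test
```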
• [SLOW TEST:6.081 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":241,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:11:55.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-af8f87f6-5774-4169-8a66-bb0a34af78c4 STEP: Creating a pod to test consume configMaps May 25 10:11:55.264: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4341ed1e-7f75-426b-b89e-aae33098dd34" in namespace "projected-6235" to be "Succeeded or Failed" May 25 10:11:55.267: INFO: Pod "pod-projected-configmaps-4341ed1e-7f75-426b-b89e-aae33098dd34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.771266ms May 25 10:11:57.272: INFO: Pod "pod-projected-configmaps-4341ed1e-7f75-426b-b89e-aae33098dd34": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007304725s May 25 10:11:59.276: INFO: Pod "pod-projected-configmaps-4341ed1e-7f75-426b-b89e-aae33098dd34": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01154793s May 25 10:12:01.281: INFO: Pod "pod-projected-configmaps-4341ed1e-7f75-426b-b89e-aae33098dd34": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016227424s May 25 10:12:03.286: INFO: Pod "pod-projected-configmaps-4341ed1e-7f75-426b-b89e-aae33098dd34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.021033106s STEP: Saw pod success May 25 10:12:03.286: INFO: Pod "pod-projected-configmaps-4341ed1e-7f75-426b-b89e-aae33098dd34" satisfied condition "Succeeded or Failed" May 25 10:12:03.289: INFO: Trying to get logs from node v1.21-worker pod pod-projected-configmaps-4341ed1e-7f75-426b-b89e-aae33098dd34 container agnhost-container: STEP: delete the pod May 25 10:12:03.483: INFO: Waiting for pod pod-projected-configmaps-4341ed1e-7f75-426b-b89e-aae33098dd34 to disappear May 25 10:12:03.578: INFO: Pod pod-projected-configmaps-4341ed1e-7f75-426b-b89e-aae33098dd34 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:03.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6235" for this suite. 
• [SLOW TEST:8.559 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":306,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":11,"skipped":161,"failed":0} [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:11:55.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 25 10:11:55.536: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d78b0046-add4-462d-8c27-a4eafbed3616" in namespace "downward-api-6607" to be "Succeeded or Failed" May 25 10:11:55.539: INFO: Pod "downwardapi-volume-d78b0046-add4-462d-8c27-a4eafbed3616": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.750312ms May 25 10:11:57.544: INFO: Pod "downwardapi-volume-d78b0046-add4-462d-8c27-a4eafbed3616": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00765488s May 25 10:11:59.548: INFO: Pod "downwardapi-volume-d78b0046-add4-462d-8c27-a4eafbed3616": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012096509s May 25 10:12:01.552: INFO: Pod "downwardapi-volume-d78b0046-add4-462d-8c27-a4eafbed3616": Phase="Running", Reason="", readiness=true. Elapsed: 6.016027095s May 25 10:12:03.578: INFO: Pod "downwardapi-volume-d78b0046-add4-462d-8c27-a4eafbed3616": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041653579s STEP: Saw pod success May 25 10:12:03.578: INFO: Pod "downwardapi-volume-d78b0046-add4-462d-8c27-a4eafbed3616" satisfied condition "Succeeded or Failed" May 25 10:12:03.581: INFO: Trying to get logs from node v1.21-worker pod downwardapi-volume-d78b0046-add4-462d-8c27-a4eafbed3616 container client-container: STEP: delete the pod May 25 10:12:04.289: INFO: Waiting for pod downwardapi-volume-d78b0046-add4-462d-8c27-a4eafbed3616 to disappear May 25 10:12:04.292: INFO: Pod downwardapi-volume-d78b0046-add4-462d-8c27-a4eafbed3616 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:04.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6607" for this suite. 
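The downward-API pod above is the same pattern with a downwardAPI volume in place of a secret: DefaultMode governs the permission bits on the projected files, and the client container reads them back. A sketch with the mode and image assumed:

```yaml
# Illustrative sketch -- mode and image are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32   # assumed image
    args: ["mounttest", "--file_mode=/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400            # assumed mode under test
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```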
• [SLOW TEST:9.078 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":161,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:11:08.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-3459 May 25 10:11:08.165: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) May 25 10:11:10.170: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) May 25 10:11:12.169: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) May 25 10:11:12.173: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-3459 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 
http://localhost:10249/proxyMode' May 25 10:11:12.371: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" May 25 10:11:12.371: INFO: stdout: "iptables" May 25 10:11:12.371: INFO: proxyMode: iptables May 25 10:11:12.379: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 25 10:11:12.381: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-3459 STEP: creating replication controller affinity-nodeport-timeout in namespace services-3459 I0525 10:11:12.396618 23 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-3459, replica count: 3 I0525 10:11:15.448276 23 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 10:11:18.448871 23 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 10:11:21.450347 23 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 10:11:24.450545 23 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 10:11:27.451737 23 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 25 10:11:28.788: INFO: Creating new exec pod May 25 10:11:33.894: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-3459 exec execpod-affinity4splz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' May 25 10:11:34.136: INFO: stderr: "+ echo hostName\n+ nc -v 
-t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" May 25 10:11:34.136: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 25 10:11:34.137: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-3459 exec execpod-affinity4splz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.96.249.151 80' May 25 10:11:34.341: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.96.249.151 80\nConnection to 10.96.249.151 80 port [tcp/http] succeeded!\n" May 25 10:11:34.341: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 25 10:11:34.341: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-3459 exec execpod-affinity4splz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.4 30052' May 25 10:11:34.542: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.4 30052\nConnection to 172.18.0.4 30052 port [tcp/*] succeeded!\n" May 25 10:11:34.542: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 25 10:11:34.542: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-3459 exec execpod-affinity4splz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.2 30052' May 25 10:11:34.744: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.2 30052\nConnection to 172.18.0.2 30052 port [tcp/*] succeeded!\n" May 25 10:11:34.744: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 25 10:11:34.744: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 
--kubeconfig=/root/.kube/config --namespace=services-3459 exec execpod-affinity4splz -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.4:30052/ ; done' May 25 10:11:35.088: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30052/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30052/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30052/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30052/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30052/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30052/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30052/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30052/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30052/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30052/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30052/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30052/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30052/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30052/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30052/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30052/\n" May 25 10:11:35.088: INFO: stdout: "\naffinity-nodeport-timeout-57mgg\naffinity-nodeport-timeout-57mgg\naffinity-nodeport-timeout-57mgg\naffinity-nodeport-timeout-57mgg\naffinity-nodeport-timeout-57mgg\naffinity-nodeport-timeout-57mgg\naffinity-nodeport-timeout-57mgg\naffinity-nodeport-timeout-57mgg\naffinity-nodeport-timeout-57mgg\naffinity-nodeport-timeout-57mgg\naffinity-nodeport-timeout-57mgg\naffinity-nodeport-timeout-57mgg\naffinity-nodeport-timeout-57mgg\naffinity-nodeport-timeout-57mgg\naffinity-nodeport-timeout-57mgg\naffinity-nodeport-timeout-57mgg" May 25 10:11:35.088: INFO: Received response from host: affinity-nodeport-timeout-57mgg May 25 10:11:35.088: INFO: 
Received response from host: affinity-nodeport-timeout-57mgg May 25 10:11:35.088: INFO: Received response from host: affinity-nodeport-timeout-57mgg May 25 10:11:35.088: INFO: Received response from host: affinity-nodeport-timeout-57mgg May 25 10:11:35.088: INFO: Received response from host: affinity-nodeport-timeout-57mgg May 25 10:11:35.088: INFO: Received response from host: affinity-nodeport-timeout-57mgg May 25 10:11:35.088: INFO: Received response from host: affinity-nodeport-timeout-57mgg May 25 10:11:35.088: INFO: Received response from host: affinity-nodeport-timeout-57mgg May 25 10:11:35.088: INFO: Received response from host: affinity-nodeport-timeout-57mgg May 25 10:11:35.088: INFO: Received response from host: affinity-nodeport-timeout-57mgg May 25 10:11:35.088: INFO: Received response from host: affinity-nodeport-timeout-57mgg May 25 10:11:35.088: INFO: Received response from host: affinity-nodeport-timeout-57mgg May 25 10:11:35.088: INFO: Received response from host: affinity-nodeport-timeout-57mgg May 25 10:11:35.088: INFO: Received response from host: affinity-nodeport-timeout-57mgg May 25 10:11:35.088: INFO: Received response from host: affinity-nodeport-timeout-57mgg May 25 10:11:35.088: INFO: Received response from host: affinity-nodeport-timeout-57mgg May 25 10:11:35.088: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-3459 exec execpod-affinity4splz -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.4:30052/' May 25 10:11:35.294: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.18.0.4:30052/\n" May 25 10:11:35.294: INFO: stdout: "affinity-nodeport-timeout-57mgg" May 25 10:11:55.295: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-3459 exec execpod-affinity4splz -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.4:30052/' May 25 10:11:55.538: INFO: 
stderr: "+ curl -q -s --connect-timeout 2 http://172.18.0.4:30052/\n" May 25 10:11:55.538: INFO: stdout: "affinity-nodeport-timeout-nld64" May 25 10:11:55.538: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-3459, will wait for the garbage collector to delete the pods May 25 10:11:55.602: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 3.748202ms May 25 10:11:55.703: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 101.096111ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:05.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3459" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:57.565 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":9,"skipped":274,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:00.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from 
pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-cfb1d154-7d09-4198-9c93-6873e16fa529 STEP: Creating a pod to test consume secrets May 25 10:12:00.646: INFO: Waiting up to 5m0s for pod "pod-secrets-d5de26c6-7de8-4d4d-ad02-cbe1e37e4474" in namespace "secrets-622" to be "Succeeded or Failed" May 25 10:12:00.648: INFO: Pod "pod-secrets-d5de26c6-7de8-4d4d-ad02-cbe1e37e4474": Phase="Pending", Reason="", readiness=false. Elapsed: 2.469568ms May 25 10:12:02.780: INFO: Pod "pod-secrets-d5de26c6-7de8-4d4d-ad02-cbe1e37e4474": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134685315s May 25 10:12:04.888: INFO: Pod "pod-secrets-d5de26c6-7de8-4d4d-ad02-cbe1e37e4474": Phase="Pending", Reason="", readiness=false. Elapsed: 4.242312007s May 25 10:12:06.901: INFO: Pod "pod-secrets-d5de26c6-7de8-4d4d-ad02-cbe1e37e4474": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.255477615s STEP: Saw pod success May 25 10:12:06.901: INFO: Pod "pod-secrets-d5de26c6-7de8-4d4d-ad02-cbe1e37e4474" satisfied condition "Succeeded or Failed" May 25 10:12:06.904: INFO: Trying to get logs from node v1.21-worker pod pod-secrets-d5de26c6-7de8-4d4d-ad02-cbe1e37e4474 container secret-env-test: STEP: delete the pod May 25 10:12:06.920: INFO: Waiting for pod pod-secrets-d5de26c6-7de8-4d4d-ad02-cbe1e37e4474 to disappear May 25 10:12:06.922: INFO: Pod pod-secrets-d5de26c6-7de8-4d4d-ad02-cbe1e37e4474 no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:06.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-622" for this suite. 
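The Secrets test above polls the pod phase roughly every two seconds until it reaches "Succeeded or Failed" or the 5m0s deadline expires. A minimal sketch of that wait loop, assuming a hypothetical `get_phase` callback in place of the framework's real pod getter:

```python
import time

def wait_for_condition(get_phase, done_phases=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal phase or the deadline passes."""
    deadline = clock() + timeout
    while clock() < deadline:
        phase = get_phase()
        if phase in done_phases:
            return phase
        sleep(interval)
    raise TimeoutError("pod never reached one of %s" % (done_phases,))

# Simulated phase sequence matching the log: three Pending polls, then Succeeded.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
result = wait_for_condition(lambda: next(phases), sleep=lambda _: None)
```

The injectable `clock`/`sleep` parameters are an assumption for testability; the real e2e framework uses `wait.PollImmediate` in Go.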
• [SLOW TEST:6.322 seconds] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":247,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:06.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:07.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7548" for this suite. 
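The NodePort session-affinity test earlier in this log curls the service 16 times and requires every response to name the same backend pod (`affinity-nodeport-timeout-57mgg`); after the affinity timeout elapses, a request is allowed to land on a different pod (`affinity-nodeport-timeout-nld64`). The pass condition reduces to a one-set check; `check_affinity` is an illustrative helper name, not the framework's:

```python
def check_affinity(hosts):
    """True if every observed response came from a single backend pod."""
    return len(hosts) > 0 and len(set(hosts)) == 1

# Responses observed in the log before the timeout: all one pod.
before_timeout = ["affinity-nodeport-timeout-57mgg"] * 16
# After the 20s wait, a different pod answered, so affinity no longer holds.
after_timeout = before_timeout + ["affinity-nodeport-timeout-nld64"]
ok_before = check_affinity(before_timeout)
ok_after = check_affinity(after_timeout)
```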
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":21,"skipped":264,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:05.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7994.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7994.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7994.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7994.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7994.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7994.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 10:12:09.843: INFO: DNS probes using dns-7994/dns-test-adba64c2-99c0-4e66-a399-e00bd9d2f103 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:09.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7994" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":10,"skipped":275,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:09.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pods May 25 10:12:09.925: INFO: created test-pod-1 May 25 10:12:09.928: INFO: created test-pod-2 May 25 10:12:09.933: INFO: created test-pod-3 STEP: 
waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:09.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-544" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":11,"skipped":295,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:11:48.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 25 10:11:48.944: INFO: >>> kubeConfig: /root/.kube/config May 25 10:11:53.482: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:10.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9660" for this suite. 
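The DNS probe commands above derive a pod A record from the pod IP with `hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-7994.pod.cluster.local"}'`, i.e. dots in the IP become dashes. The same transform as a small sketch (helper name assumed):

```python
def pod_a_record(ip, namespace):
    """Dashed-IP pod DNS name, e.g. 10.244.1.74 -> 10-244-1-74.<ns>.pod.cluster.local."""
    return "%s.%s.pod.cluster.local" % (ip.replace(".", "-"), namespace)

rec = pod_a_record("10.244.1.74", "dns-7994")
```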
• [SLOW TEST:21.859 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":21,"skipped":353,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:11:50.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-6067 [It] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating statefulset ss in namespace statefulset-6067 May 25 10:11:50.903: INFO: Found 0 stateful pods, waiting for 1 May 25 10:12:00.908: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified STEP: Patch a scale subresource STEP: verifying 
the statefulset Spec.Replicas was modified [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 May 25 10:12:00.930: INFO: Deleting all statefulset in ns statefulset-6067 May 25 10:12:00.933: INFO: Scaling statefulset ss to 0 May 25 10:12:10.952: INFO: Waiting for statefulset status.replicas updated to 0 May 25 10:12:10.955: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:10.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6067" for this suite. • [SLOW TEST:20.109 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":17,"skipped":308,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:07.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:15.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-7345" for this suite. 
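The sysctl test sets `kernel.shm_rmid_forced` on the pod via its security context and then reads the value back to check it was actually applied. Verifying a sysctl from inside a container amounts to mapping the dotted name to its procfs path (hypothetical helper, shown for illustration):

```python
def sysctl_path(name):
    """Map a sysctl name to its /proc path: kernel.shm_rmid_forced -> /proc/sys/kernel/shm_rmid_forced."""
    return "/proc/sys/" + name.replace(".", "/")

path = sysctl_path("kernel.shm_rmid_forced")
```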
• [SLOW TEST:8.063 seconds] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":22,"skipped":272,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:10.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-fcca17d3-076c-4fe4-8a3a-075082248372 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:17.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7400" for this suite. 
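The ConfigMap test above verifies that binary data round-trips through a volume mount. ConfigMaps carry such payloads in the `binaryData` field, base64-encoded in the API object, and the kubelet writes the decoded bytes into the volume. A sketch of that encode/decode pair with an arbitrary non-UTF-8 payload (the bytes are made up for illustration):

```python
import base64

payload = bytes([0xFF, 0xFE, 0x00, 0x7F])            # arbitrary non-UTF-8 bytes
encoded = base64.b64encode(payload).decode("ascii")  # what goes in binaryData
decoded = base64.b64decode(encoded)                  # what lands in the mounted file
```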
• [SLOW TEST:6.076 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":313,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:11:47.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-3430 STEP: creating a selector STEP: Creating the service pods in kubernetes May 25 10:11:47.487: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 25 10:11:47.510: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 25 10:11:49.514: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 25 10:11:51.515: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 25 10:11:53.683: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 10:11:55.514: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 10:11:57.515: INFO: 
The status of Pod netserver-0 is Running (Ready = false) May 25 10:11:59.515: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 10:12:01.515: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 10:12:03.578: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 10:12:05.579: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 10:12:07.517: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 10:12:09.514: INFO: The status of Pod netserver-0 is Running (Ready = true) May 25 10:12:09.521: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 25 10:12:17.539: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 May 25 10:12:17.539: INFO: Breadth first check of 10.244.1.74 on host 172.18.0.4... May 25 10:12:17.542: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.84:9080/dial?request=hostname&protocol=http&host=10.244.1.74&port=8080&tries=1'] Namespace:pod-network-test-3430 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 10:12:17.542: INFO: >>> kubeConfig: /root/.kube/config May 25 10:12:17.673: INFO: Waiting for responses: map[] May 25 10:12:17.674: INFO: reached 10.244.1.74 after 0/1 tries May 25 10:12:17.674: INFO: Breadth first check of 10.244.2.233 on host 172.18.0.2... 
May 25 10:12:17.677: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.84:9080/dial?request=hostname&protocol=http&host=10.244.2.233&port=8080&tries=1'] Namespace:pod-network-test-3430 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 10:12:17.677: INFO: >>> kubeConfig: /root/.kube/config May 25 10:12:17.794: INFO: Waiting for responses: map[] May 25 10:12:17.794: INFO: reached 10.244.2.233 after 0/1 tries May 25 10:12:17.794: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:17.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3430" for this suite. • [SLOW TEST:30.353 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":304,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:10.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service 
account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-c708c254-5148-41bb-a549-8747d6b70b35 STEP: Creating a pod to test consume secrets May 25 10:12:10.050: INFO: Waiting up to 5m0s for pod "pod-secrets-0a9f2e82-437d-430e-bb84-8cf00c62aac0" in namespace "secrets-4253" to be "Succeeded or Failed" May 25 10:12:10.053: INFO: Pod "pod-secrets-0a9f2e82-437d-430e-bb84-8cf00c62aac0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.8524ms May 25 10:12:12.057: INFO: Pod "pod-secrets-0a9f2e82-437d-430e-bb84-8cf00c62aac0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006755236s May 25 10:12:14.061: INFO: Pod "pod-secrets-0a9f2e82-437d-430e-bb84-8cf00c62aac0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010570634s May 25 10:12:16.065: INFO: Pod "pod-secrets-0a9f2e82-437d-430e-bb84-8cf00c62aac0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015159885s May 25 10:12:18.070: INFO: Pod "pod-secrets-0a9f2e82-437d-430e-bb84-8cf00c62aac0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.019900598s STEP: Saw pod success May 25 10:12:18.070: INFO: Pod "pod-secrets-0a9f2e82-437d-430e-bb84-8cf00c62aac0" satisfied condition "Succeeded or Failed" May 25 10:12:18.074: INFO: Trying to get logs from node v1.21-worker pod pod-secrets-0a9f2e82-437d-430e-bb84-8cf00c62aac0 container secret-volume-test: STEP: delete the pod May 25 10:12:18.090: INFO: Waiting for pod pod-secrets-0a9f2e82-437d-430e-bb84-8cf00c62aac0 to disappear May 25 10:12:18.093: INFO: Pod pod-secrets-0a9f2e82-437d-430e-bb84-8cf00c62aac0 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:18.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4253" for this suite. • [SLOW TEST:8.091 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":327,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:15.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override command May 25 10:12:15.162: INFO: Waiting up to 5m0s for pod "client-containers-d3308e3a-1eb4-4559-b408-31ff0e48c7f2" in namespace "containers-1580" to be "Succeeded or Failed" May 25 10:12:15.165: INFO: Pod "client-containers-d3308e3a-1eb4-4559-b408-31ff0e48c7f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.99926ms May 25 10:12:17.169: INFO: Pod "client-containers-d3308e3a-1eb4-4559-b408-31ff0e48c7f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007165942s May 25 10:12:19.174: INFO: Pod "client-containers-d3308e3a-1eb4-4559-b408-31ff0e48c7f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01206773s STEP: Saw pod success May 25 10:12:19.174: INFO: Pod "client-containers-d3308e3a-1eb4-4559-b408-31ff0e48c7f2" satisfied condition "Succeeded or Failed" May 25 10:12:19.178: INFO: Trying to get logs from node v1.21-worker pod client-containers-d3308e3a-1eb4-4559-b408-31ff0e48c7f2 container agnhost-container: STEP: delete the pod May 25 10:12:19.192: INFO: Waiting for pod client-containers-d3308e3a-1eb4-4559-b408-31ff0e48c7f2 to disappear May 25 10:12:19.195: INFO: Pod client-containers-d3308e3a-1eb4-4559-b408-31ff0e48c7f2 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:19.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1580" for this suite. 
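The Docker Containers test above checks that a pod's `command` overrides the image's default ENTRYPOINT. The documented Kubernetes precedence between pod `command`/`args` and image ENTRYPOINT/CMD can be sketched as follows; the entrypoint and command values are illustrative, not taken from the test's actual agnhost image:

```python
def effective_invocation(image_entrypoint, image_cmd, pod_command=None, pod_args=None):
    """Resolve what actually runs, per the Kubernetes command/args precedence rules."""
    if pod_command and pod_args:
        return pod_command + pod_args
    if pod_command:
        return pod_command                       # image CMD is ignored when command is set
    if pod_args:
        return (image_entrypoint or []) + pod_args
    return (image_entrypoint or []) + (image_cmd or [])

# Pod sets command only: image ENTRYPOINT and CMD are both overridden.
inv = effective_invocation(["/docker-entrypoint.sh"], ["serve"],
                           pod_command=["/agnhost", "entrypoint-tester"])
```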
• ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":290,"failed":0} S ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:17.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-3402/configmap-test-4313280a-a18a-4361-9589-c12a8ba4a949 STEP: Creating a pod to test consume configMaps May 25 10:12:17.185: INFO: Waiting up to 5m0s for pod "pod-configmaps-81b5be54-b53a-49c8-b650-7c6a7e588e75" in namespace "configmap-3402" to be "Succeeded or Failed" May 25 10:12:17.189: INFO: Pod "pod-configmaps-81b5be54-b53a-49c8-b650-7c6a7e588e75": Phase="Pending", Reason="", readiness=false. Elapsed: 3.243421ms May 25 10:12:19.192: INFO: Pod "pod-configmaps-81b5be54-b53a-49c8-b650-7c6a7e588e75": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006926583s STEP: Saw pod success May 25 10:12:19.192: INFO: Pod "pod-configmaps-81b5be54-b53a-49c8-b650-7c6a7e588e75" satisfied condition "Succeeded or Failed" May 25 10:12:19.196: INFO: Trying to get logs from node v1.21-worker2 pod pod-configmaps-81b5be54-b53a-49c8-b650-7c6a7e588e75 container env-test: STEP: delete the pod May 25 10:12:19.209: INFO: Waiting for pod pod-configmaps-81b5be54-b53a-49c8-b650-7c6a7e588e75 to disappear May 25 10:12:19.211: INFO: Pod pod-configmaps-81b5be54-b53a-49c8-b650-7c6a7e588e75 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:19.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3402" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":352,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:03.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:20.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8422" for this suite. • [SLOW TEST:16.936 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":-1,"completed":21,"skipped":341,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:20.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating api versions May 25 10:12:20.877: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-9914 api-versions' May 25 10:12:20.990: INFO: stderr: "" May 25 10:12:20.990: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncrd-publish-openapi-test-multi-ver.example.com/v2\ncrd-publish-openapi-test-multi-ver.example.com/v3\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nk8s.cni.cncf.io/v1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nprojectcontour.io/v1\nprojectcontour.io/v1alpha1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:20.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9914" for this suite. 
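The check performed above — that `v1` appears in `kubectl api-versions` output — amounts to splitting the stdout into lines and testing exact membership. A small sketch (assumed helper name; the sample listing is abbreviated from the log above):

```python
def has_api_version(api_versions_stdout, group_version):
    """Check a `kubectl api-versions` listing (one group/version per line) for an exact entry."""
    # Exact line comparison, so "v1" does not false-match "apps/v1".
    return group_version in api_versions_stdout.strip().splitlines()

sample = "admissionregistration.k8s.io/v1\napps/v1\nbatch/v1\npolicy/v1\nv1\n"
print(has_api_version(sample, "v1"))  # True
```

Substring matching would be a bug here, since nearly every line ends in `v1`; line-wise comparison is what makes the check meaningful.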
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":22,"skipped":386,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:17.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium May 25 10:12:17.871: INFO: Waiting up to 5m0s for pod "pod-da395fd5-f9f3-49a6-aee7-73b0104ac353" in namespace "emptydir-1090" to be "Succeeded or Failed" May 25 10:12:17.874: INFO: Pod "pod-da395fd5-f9f3-49a6-aee7-73b0104ac353": Phase="Pending", Reason="", readiness=false. Elapsed: 3.040404ms May 25 10:12:19.879: INFO: Pod "pod-da395fd5-f9f3-49a6-aee7-73b0104ac353": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007660842s May 25 10:12:21.884: INFO: Pod "pod-da395fd5-f9f3-49a6-aee7-73b0104ac353": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012782833s STEP: Saw pod success May 25 10:12:21.884: INFO: Pod "pod-da395fd5-f9f3-49a6-aee7-73b0104ac353" satisfied condition "Succeeded or Failed" May 25 10:12:21.887: INFO: Trying to get logs from node v1.21-worker pod pod-da395fd5-f9f3-49a6-aee7-73b0104ac353 container test-container: STEP: delete the pod May 25 10:12:21.901: INFO: Waiting for pod pod-da395fd5-f9f3-49a6-aee7-73b0104ac353 to disappear May 25 10:12:21.904: INFO: Pod pod-da395fd5-f9f3-49a6-aee7-73b0104ac353 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:21.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1090" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":315,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:18.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
May 25 10:12:18.171: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-3293 16b57040-390e-436d-ac51-d453fedd3194 498578 0 2021-05-25 10:12:18 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-05-25 10:12:18 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-crl5z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,
Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-crl5z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},R
untimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:12:18.175: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) May 25 10:12:20.179: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) May 25 10:12:22.179: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) May 25 10:12:24.180: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... May 25 10:12:24.180: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-3293 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 10:12:24.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Verifying customized DNS server is configured on pod... May 25 10:12:24.318: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-3293 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 10:12:24.318: INFO: >>> kubeConfig: /root/.kube/config May 25 10:12:24.458: INFO: Deleting pod test-dns-nameservers... 
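The pod dumped above sets `dnsPolicy: None` with a custom `dnsConfig`, which is the supported way to give a pod its own resolver configuration instead of the cluster default. A minimal sketch of that spec, using the same nameserver (`1.1.1.1`) and search domain (`resolv.conf.local`) that appear in the log:

```python
# With dnsPolicy "None", Kubernetes ignores the cluster/node resolv.conf
# entirely, so a dnsConfig block must be supplied.
dns_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "test-dns-nameservers"},
    "spec": {
        "dnsPolicy": "None",
        "dnsConfig": {
            "nameservers": ["1.1.1.1"],
            "searches": ["resolv.conf.local"],
        },
        "containers": [{
            "name": "agnhost-container",
            "image": "k8s.gcr.io/e2e-test-images/agnhost:2.32",
            "args": ["pause"],
        }],
    },
}
```

The kubelet renders this into the container's `/etc/resolv.conf`, which is why the test can verify the configuration by exec'ing `dns-server-list` and `dns-suffix` inside the pod.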
[AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:24.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3293" for this suite. • [SLOW TEST:6.342 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":13,"skipped":340,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:21.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-7bbf5cd0-932b-44ae-984e-ec0d7cafb51e STEP: Creating a pod to test consume secrets May 25 10:12:21.970: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-34912549-77a6-40ec-9525-daf72b3bcefe" in namespace "projected-1016" to be "Succeeded or Failed" May 25 10:12:21.973: INFO: Pod "pod-projected-secrets-34912549-77a6-40ec-9525-daf72b3bcefe": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.657508ms May 25 10:12:23.976: INFO: Pod "pod-projected-secrets-34912549-77a6-40ec-9525-daf72b3bcefe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005921453s May 25 10:12:25.980: INFO: Pod "pod-projected-secrets-34912549-77a6-40ec-9525-daf72b3bcefe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010061063s STEP: Saw pod success May 25 10:12:25.980: INFO: Pod "pod-projected-secrets-34912549-77a6-40ec-9525-daf72b3bcefe" satisfied condition "Succeeded or Failed" May 25 10:12:25.983: INFO: Trying to get logs from node v1.21-worker pod pod-projected-secrets-34912549-77a6-40ec-9525-daf72b3bcefe container projected-secret-volume-test: STEP: delete the pod May 25 10:12:25.996: INFO: Waiting for pod pod-projected-secrets-34912549-77a6-40ec-9525-daf72b3bcefe to disappear May 25 10:12:25.999: INFO: Pod pod-projected-secrets-34912549-77a6-40ec-9525-daf72b3bcefe no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:25.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1016" for this suite. 
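The projected-secret test above mounts a secret through a `projected` volume with `defaultMode` set, which controls the file permissions of the projected keys. A sketch of such a manifest — the secret name, mount path, and mode value 0o400 are illustrative assumptions, not taken from the test:

```python
def projected_secret_pod(secret_name, default_mode):
    """Sketch: pod mounting a secret via a projected volume with explicit file mode."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-projected-secrets-demo"},  # hypothetical name
        "spec": {
            "restartPolicy": "Never",
            "volumes": [{
                "name": "projected-secret-volume",
                "projected": {
                    # The API stores the mode as a decimal integer: 256 == 0o400.
                    "defaultMode": default_mode,
                    "sources": [{"secret": {"name": secret_name}}],
                },
            }],
            "containers": [{
                "name": "projected-secret-volume-test",
                "image": "k8s.gcr.io/e2e-test-images/agnhost:2.32",
                "args": ["pause"],
                "volumeMounts": [{
                    "name": "projected-secret-volume",
                    "mountPath": "/etc/projected-secret-volume",
                    "readOnly": True,
                }],
            }],
        },
    }

p = projected_secret_pod("projected-secret-test", 0o400)
```

Without `defaultMode`, projected volumes fall back to 0644 (the `DefaultMode:*420` visible in the pod dump earlier in this log — 420 decimal is 0o644).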
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":322,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:21.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 25 10:12:21.095: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ca8d26f6-f995-4ee3-9f6e-a49c46e20782" in namespace "downward-api-3172" to be "Succeeded or Failed" May 25 10:12:21.097: INFO: Pod "downwardapi-volume-ca8d26f6-f995-4ee3-9f6e-a49c46e20782": Phase="Pending", Reason="", readiness=false. Elapsed: 2.57153ms May 25 10:12:23.101: INFO: Pod "downwardapi-volume-ca8d26f6-f995-4ee3-9f6e-a49c46e20782": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006305878s May 25 10:12:25.106: INFO: Pod "downwardapi-volume-ca8d26f6-f995-4ee3-9f6e-a49c46e20782": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010886873s May 25 10:12:27.110: INFO: Pod "downwardapi-volume-ca8d26f6-f995-4ee3-9f6e-a49c46e20782": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.015708917s STEP: Saw pod success May 25 10:12:27.111: INFO: Pod "downwardapi-volume-ca8d26f6-f995-4ee3-9f6e-a49c46e20782" satisfied condition "Succeeded or Failed" May 25 10:12:27.114: INFO: Trying to get logs from node v1.21-worker pod downwardapi-volume-ca8d26f6-f995-4ee3-9f6e-a49c46e20782 container client-container: STEP: delete the pod May 25 10:12:27.127: INFO: Waiting for pod downwardapi-volume-ca8d26f6-f995-4ee3-9f6e-a49c46e20782 to disappear May 25 10:12:27.130: INFO: Pod downwardapi-volume-ca8d26f6-f995-4ee3-9f6e-a49c46e20782 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:27.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3172" for this suite. • [SLOW TEST:6.074 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":429,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:24.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:12:24.584: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"eb6a06f4-1497-4c5a-b571-654eb418a69e", Controller:(*bool)(0xc00230a012), BlockOwnerDeletion:(*bool)(0xc00230a013)}} May 25 10:12:24.589: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"518ab928-0f43-4786-ab1d-b12991a55cda", Controller:(*bool)(0xc004bd826a), BlockOwnerDeletion:(*bool)(0xc004bd826b)}} May 25 10:12:24.594: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"7003c4e7-e37e-40f1-a8dd-0b5b9e069589", Controller:(*bool)(0xc004c0ceda), BlockOwnerDeletion:(*bool)(0xc004c0cedb)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:29.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7566" for this suite. 
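The garbage-collector test above builds a deliberate ownership cycle — the log shows pod1 owned by pod3, pod2 owned by pod1, and pod3 owned by pod2 — and verifies that deletion is not blocked by it. A sketch of the ownerReference shape and a check that the ownership chain really closes into a cycle (UIDs here are placeholders):

```python
def owner_ref(pod_name, uid):
    """Sketch of a v1 OwnerReference as set on each pod's metadata."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "name": pod_name,
        "uid": uid,
        "controller": True,
        "blockOwnerDeletion": True,
    }

# Ownership as reported in the log: pod1 <- pod3, pod2 <- pod1, pod3 <- pod2.
owners = {"pod1": "pod3", "pod2": "pod1", "pod3": "pod2"}

# Following owner links three times returns to the start: a dependency circle,
# which the garbage collector must break rather than deadlock on.
node = "pod1"
for _ in range(3):
    node = owners[node]
print(node)  # "pod1" -- the chain is a cycle
```

Despite `blockOwnerDeletion: true` on every reference, the GC detects the circle and deletes all three pods, which is exactly what the test asserts.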
• [SLOW TEST:5.077 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":14,"skipped":372,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:27.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:12:27.243: INFO: Creating deployment "test-recreate-deployment" May 25 10:12:27.248: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 25 10:12:27.254: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 25 10:12:29.262: INFO: Waiting deployment "test-recreate-deployment" to complete May 25 10:12:29.265: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534347, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534347, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534347, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534347, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6cb8b65c46\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 10:12:31.270: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534347, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534347, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534347, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534347, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6cb8b65c46\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 10:12:33.269: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 25 10:12:33.278: INFO: Updating deployment test-recreate-deployment May 25 10:12:33.278: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 May 25 10:12:33.327: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-6186 39993b48-56a6-4609-8ab7-7546e44e45d0 499039 2 2021-05-25 10:12:27 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-05-25 10:12:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-05-25 10:12:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0010aed28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-05-25 10:12:33 +0000 UTC,LastTransitionTime:2021-05-25 10:12:33 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-85d47dcb4" is progressing.,LastUpdateTime:2021-05-25 10:12:33 +0000 UTC,LastTransitionTime:2021-05-25 10:12:27 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 25 10:12:33.332: INFO: New ReplicaSet "test-recreate-deployment-85d47dcb4" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-85d47dcb4 deployment-6186 93c607e7-f23d-463a-992a-237f2c9c09d6 499037 1 2021-05-25 10:12:33 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 39993b48-56a6-4609-8ab7-7546e44e45d0 0xc0010af5c0 0xc0010af5c1}] [] [{kube-controller-manager Update apps/v1 2021-05-25 10:12:33 +0000 
UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39993b48-56a6-4609-8ab7-7546e44e45d0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 85d47dcb4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0010af658 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 25 10:12:33.332: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 25 10:12:33.332: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6cb8b65c46 deployment-6186 acf8bb09-5179-4d8c-899f-efc3da50a5ed 499028 2 2021-05-25 10:12:27 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 39993b48-56a6-4609-8ab7-7546e44e45d0 0xc0010af3c7 0xc0010af3c8}] [] [{kube-controller-manager Update apps/v1 2021-05-25 10:12:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39993b48-56a6-4609-8ab7-7546e44e45d0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6cb8b65c46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC 
map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0010af458 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 25 10:12:33.335: INFO: Pod "test-recreate-deployment-85d47dcb4-fp6jc" is not available: &Pod{ObjectMeta:{test-recreate-deployment-85d47dcb4-fp6jc test-recreate-deployment-85d47dcb4- deployment-6186 1f603fb6-d0a0-492c-9371-dca14ce70b5d 499035 0 2021-05-25 10:12:33 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [{apps/v1 ReplicaSet test-recreate-deployment-85d47dcb4 93c607e7-f23d-463a-992a-237f2c9c09d6 0xc0047dc010 0xc0047dc011}] [] [{kube-controller-manager Update v1 2021-05-25 10:12:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93c607e7-f23d-463a-992a-237f2c9c09d6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-49pz9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Comm
and:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-49pz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstra
ints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:12:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:33.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6186" for this suite. • [SLOW TEST:6.128 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":24,"skipped":474,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:33.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with one valid and two invalid sysctls [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:33.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-8674" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":25,"skipped":493,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:29.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name projected-secret-test-fc11f96e-8347-4de9-8579-f4b26107fe66 STEP: Creating a pod to test consume secrets May 25 10:12:29.733: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4f5a558d-0a59-490c-b9ce-4e682230db5b" in namespace "projected-8938" to be "Succeeded or Failed" May 25 10:12:29.736: INFO: Pod 
"pod-projected-secrets-4f5a558d-0a59-490c-b9ce-4e682230db5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.564708ms May 25 10:12:31.740: INFO: Pod "pod-projected-secrets-4f5a558d-0a59-490c-b9ce-4e682230db5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006449533s May 25 10:12:33.745: INFO: Pod "pod-projected-secrets-4f5a558d-0a59-490c-b9ce-4e682230db5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011530293s May 25 10:12:35.750: INFO: Pod "pod-projected-secrets-4f5a558d-0a59-490c-b9ce-4e682230db5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016293338s STEP: Saw pod success May 25 10:12:35.750: INFO: Pod "pod-projected-secrets-4f5a558d-0a59-490c-b9ce-4e682230db5b" satisfied condition "Succeeded or Failed" May 25 10:12:35.753: INFO: Trying to get logs from node v1.21-worker pod pod-projected-secrets-4f5a558d-0a59-490c-b9ce-4e682230db5b container secret-volume-test: STEP: delete the pod May 25 10:12:35.767: INFO: Waiting for pod pod-projected-secrets-4f5a558d-0a59-490c-b9ce-4e682230db5b to disappear May 25 10:12:35.770: INFO: Pod pod-projected-secrets-4f5a558d-0a59-490c-b9ce-4e682230db5b no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:35.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8938" for this suite. 
• [SLOW TEST:6.083 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":422,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:26.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:38.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6666" for this suite. 
• [SLOW TEST:12.047 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":21,"skipped":379,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:35.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Pod with a static label STEP: watching for Pod to be ready May 25 10:12:35.840: INFO: observed Pod pod-test in namespace pods-4983 in phase Pending with labels: map[test-pod-static:true] & conditions [] May 25 10:12:35.842: INFO: observed Pod pod-test in namespace pods-4983 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:12:35 +0000 UTC }] May 25 10:12:35.856: INFO: observed Pod pod-test in namespace pods-4983 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:12:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 
+0000 UTC 2021-05-25 10:12:35 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:12:35 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:12:35 +0000 UTC }] May 25 10:12:36.340: INFO: observed Pod pod-test in namespace pods-4983 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:12:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:12:35 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:12:35 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:12:35 +0000 UTC }] May 25 10:12:38.409: INFO: Found Pod pod-test in namespace pods-4983 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:12:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:12:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:12:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-25 10:12:35 +0000 UTC }] STEP: patching the Pod with a new Label and updated data May 25 10:12:38.422: INFO: observed event type ADDED STEP: getting the Pod and ensuring that it's patched STEP: getting the PodStatus STEP: replacing the Pod's status Ready condition to False STEP: check the Pod again to ensure its Ready conditions are False STEP: deleting the Pod via a Collection with a LabelSelector STEP: watching for the Pod to be deleted May 25 10:12:38.443: INFO: observed event type ADDED May 25 10:12:38.443: INFO: observed event type MODIFIED May 25 10:12:38.443: INFO: observed event type MODIFIED May 25 10:12:38.443: INFO: observed 
event type MODIFIED May 25 10:12:38.443: INFO: observed event type MODIFIED May 25 10:12:38.443: INFO: observed event type MODIFIED May 25 10:12:38.443: INFO: observed event type MODIFIED May 25 10:12:38.443: INFO: observed event type MODIFIED [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:38.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4983" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":16,"skipped":430,"failed":0} SSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:33.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token STEP: reading a file in the container May 25 10:12:40.024: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1107 pod-service-account-1e34640e-519a-4fec-a18e-483d8689d0ec -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 25 10:12:40.255: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1107 pod-service-account-1e34640e-519a-4fec-a18e-483d8689d0ec -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 25 10:12:40.498: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1107 
pod-service-account-1e34640e-519a-4fec-a18e-483d8689d0ec -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:40.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1107" for this suite. • [SLOW TEST:7.307 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":26,"skipped":525,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:10.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-3623 STEP: creating a selector STEP: Creating the service pods in kubernetes May 25 10:12:10.804: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 25 10:12:10.822: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 25 10:12:12.826: INFO: The status of Pod netserver-0 is Pending, waiting for it to be 
Running (with Ready = true) May 25 10:12:14.827: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 25 10:12:16.827: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 25 10:12:18.827: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 10:12:20.827: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 10:12:22.826: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 10:12:24.849: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 10:12:26.827: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 10:12:28.827: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 10:12:30.830: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 10:12:32.826: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 10:12:34.827: INFO: The status of Pod netserver-0 is Running (Ready = false) May 25 10:12:36.827: INFO: The status of Pod netserver-0 is Running (Ready = true) May 25 10:12:36.832: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 25 10:12:42.887: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 May 25 10:12:42.887: INFO: Breadth first check of 10.244.1.86 on host 172.18.0.4... 
May 25 10:12:42.890: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.101:9080/dial?request=hostname&protocol=udp&host=10.244.1.86&port=8081&tries=1'] Namespace:pod-network-test-3623 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 10:12:42.890: INFO: >>> kubeConfig: /root/.kube/config May 25 10:12:43.002: INFO: Waiting for responses: map[] May 25 10:12:43.002: INFO: reached 10.244.1.86 after 0/1 tries May 25 10:12:43.002: INFO: Breadth first check of 10.244.2.234 on host 172.18.0.2... May 25 10:12:43.005: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.101:9080/dial?request=hostname&protocol=udp&host=10.244.2.234&port=8081&tries=1'] Namespace:pod-network-test-3623 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 10:12:43.005: INFO: >>> kubeConfig: /root/.kube/config May 25 10:12:43.131: INFO: Waiting for responses: map[] May 25 10:12:43.131: INFO: reached 10.244.2.234 after 0/1 tries May 25 10:12:43.131: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:43.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3623" for this suite. 
• [SLOW TEST:32.360 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":358,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:40.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:12:40.829: INFO: Waiting up to 5m0s for pod "busybox-user-65534-87f8e21c-7de2-478c-af21-1b4b76fda7ed" in namespace "security-context-test-6401" to be "Succeeded or Failed" May 25 10:12:40.831: INFO: Pod "busybox-user-65534-87f8e21c-7de2-478c-af21-1b4b76fda7ed": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.267579ms May 25 10:12:42.835: INFO: Pod "busybox-user-65534-87f8e21c-7de2-478c-af21-1b4b76fda7ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006020973s May 25 10:12:44.840: INFO: Pod "busybox-user-65534-87f8e21c-7de2-478c-af21-1b4b76fda7ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010660006s May 25 10:12:44.840: INFO: Pod "busybox-user-65534-87f8e21c-7de2-478c-af21-1b4b76fda7ed" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:44.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6401" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":533,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:43.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:45.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3398" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":372,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:45.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:12:45.266: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:45.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7424" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":24,"skipped":376,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:45.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Starting the proxy May 25 10:12:45.890: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-801 proxy --unix-socket=/tmp/kubectl-proxy-unix952331658/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:45.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-801" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":25,"skipped":411,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:45.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-fb5ae448-b3b1-4645-b6ab-c13f68efbd78 STEP: Creating a pod to test consume configMaps May 25 10:12:46.020: INFO: Waiting up to 5m0s for pod "pod-configmaps-942a124e-f49b-4e2f-8e94-65e028db2939" in namespace "configmap-4756" to be "Succeeded or Failed" May 25 10:12:46.023: INFO: Pod "pod-configmaps-942a124e-f49b-4e2f-8e94-65e028db2939": Phase="Pending", Reason="", readiness=false. Elapsed: 2.711171ms May 25 10:12:48.027: INFO: Pod "pod-configmaps-942a124e-f49b-4e2f-8e94-65e028db2939": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006563369s May 25 10:12:50.030: INFO: Pod "pod-configmaps-942a124e-f49b-4e2f-8e94-65e028db2939": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010282577s STEP: Saw pod success May 25 10:12:50.030: INFO: Pod "pod-configmaps-942a124e-f49b-4e2f-8e94-65e028db2939" satisfied condition "Succeeded or Failed" May 25 10:12:50.033: INFO: Trying to get logs from node v1.21-worker pod pod-configmaps-942a124e-f49b-4e2f-8e94-65e028db2939 container agnhost-container: STEP: delete the pod May 25 10:12:50.045: INFO: Waiting for pod pod-configmaps-942a124e-f49b-4e2f-8e94-65e028db2939 to disappear May 25 10:12:50.048: INFO: Pod pod-configmaps-942a124e-f49b-4e2f-8e94-65e028db2939 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:50.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4756" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":413,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:38.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-9023 STEP: changing the ExternalName service to type=NodePort STEP: creating 
replication controller externalname-service in namespace services-9023 I0525 10:12:38.499577 23 runners.go:190] Created replication controller with name: externalname-service, namespace: services-9023, replica count: 2 I0525 10:12:41.551980 23 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 10:12:44.552334 23 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 25 10:12:44.552: INFO: Creating new exec pod May 25 10:12:49.573: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpodggs6n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' May 25 10:12:49.835: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" May 25 10:12:49.835: INFO: stdout: "externalname-service-wmsbg" May 25 10:12:49.835: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpodggs6n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.96.57.61 80' May 25 10:12:50.065: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.96.57.61 80\nConnection to 10.96.57.61 80 port [tcp/http] succeeded!\n" May 25 10:12:50.065: INFO: stdout: "externalname-service-wmsbg" May 25 10:12:50.065: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpodggs6n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.4 32329' May 25 10:12:50.308: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.4 32329\nConnection to 172.18.0.4 32329 port [tcp/*] succeeded!\n" May 25 10:12:50.308: INFO: stdout: "externalname-service-wmsbg" May 25 
10:12:50.308: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-9023 exec execpodggs6n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.2 32329' May 25 10:12:50.509: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.2 32329\nConnection to 172.18.0.2 32329 port [tcp/*] succeeded!\n" May 25 10:12:50.509: INFO: stdout: "externalname-service-wmsbg" May 25 10:12:50.509: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:50.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9023" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:12.069 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":17,"skipped":433,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:44.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a 
volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:12:48.966: INFO: Deleting pod "var-expansion-0542579e-cac7-4e28-90c0-34b77cce9969" in namespace "var-expansion-5806" May 25 10:12:48.972: INFO: Wait up to 5m0s for pod "var-expansion-0542579e-cac7-4e28-90c0-34b77cce9969" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:52.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5806" for this suite. • [SLOW TEST:8.114 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":28,"skipped":545,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:50.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:12:50.127: INFO: Creating pod... 
May 25 10:12:50.136: INFO: Pod Quantity: 1 Status: Pending May 25 10:12:51.139: INFO: Pod Quantity: 1 Status: Pending May 25 10:12:52.141: INFO: Pod Quantity: 1 Status: Pending May 25 10:12:53.180: INFO: Pod Quantity: 1 Status: Pending May 25 10:12:54.141: INFO: Pod Status: Running May 25 10:12:54.141: INFO: Creating service... May 25 10:12:54.179: INFO: Starting http.Client for https://172.30.13.90:33295/api/v1/namespaces/proxy-5261/pods/agnhost/proxy/some/path/with/DELETE May 25 10:12:54.183: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE May 25 10:12:54.183: INFO: Starting http.Client for https://172.30.13.90:33295/api/v1/namespaces/proxy-5261/pods/agnhost/proxy/some/path/with/GET May 25 10:12:54.186: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET May 25 10:12:54.187: INFO: Starting http.Client for https://172.30.13.90:33295/api/v1/namespaces/proxy-5261/pods/agnhost/proxy/some/path/with/HEAD May 25 10:12:54.190: INFO: http.Client request:HEAD | StatusCode:200 May 25 10:12:54.190: INFO: Starting http.Client for https://172.30.13.90:33295/api/v1/namespaces/proxy-5261/pods/agnhost/proxy/some/path/with/OPTIONS May 25 10:12:54.193: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS May 25 10:12:54.193: INFO: Starting http.Client for https://172.30.13.90:33295/api/v1/namespaces/proxy-5261/pods/agnhost/proxy/some/path/with/PATCH May 25 10:12:54.196: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH May 25 10:12:54.196: INFO: Starting http.Client for https://172.30.13.90:33295/api/v1/namespaces/proxy-5261/pods/agnhost/proxy/some/path/with/POST May 25 10:12:54.198: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST May 25 10:12:54.199: INFO: Starting http.Client for https://172.30.13.90:33295/api/v1/namespaces/proxy-5261/pods/agnhost/proxy/some/path/with/PUT May 25 10:12:54.201: INFO: http.Client request:PUT | StatusCode:200 
| Response:foo | Method:PUT May 25 10:12:54.202: INFO: Starting http.Client for https://172.30.13.90:33295/api/v1/namespaces/proxy-5261/services/test-service/proxy/some/path/with/DELETE May 25 10:12:54.205: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE May 25 10:12:54.205: INFO: Starting http.Client for https://172.30.13.90:33295/api/v1/namespaces/proxy-5261/services/test-service/proxy/some/path/with/GET May 25 10:12:54.209: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET May 25 10:12:54.209: INFO: Starting http.Client for https://172.30.13.90:33295/api/v1/namespaces/proxy-5261/services/test-service/proxy/some/path/with/HEAD May 25 10:12:54.212: INFO: http.Client request:HEAD | StatusCode:200 May 25 10:12:54.212: INFO: Starting http.Client for https://172.30.13.90:33295/api/v1/namespaces/proxy-5261/services/test-service/proxy/some/path/with/OPTIONS May 25 10:12:54.221: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS May 25 10:12:54.222: INFO: Starting http.Client for https://172.30.13.90:33295/api/v1/namespaces/proxy-5261/services/test-service/proxy/some/path/with/PATCH May 25 10:12:54.237: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH May 25 10:12:54.238: INFO: Starting http.Client for https://172.30.13.90:33295/api/v1/namespaces/proxy-5261/services/test-service/proxy/some/path/with/POST May 25 10:12:54.244: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST May 25 10:12:54.244: INFO: Starting http.Client for https://172.30.13.90:33295/api/v1/namespaces/proxy-5261/services/test-service/proxy/some/path/with/PUT May 25 10:12:54.248: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT [AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:54.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "proxy-5261" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":27,"skipped":440,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:50.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes May 25 10:12:50.593: INFO: The status of Pod pod-update-867bbbc7-3809-48ab-bbb1-2199d7531662 is Pending, waiting for it to be Running (with Ready = true) May 25 10:12:52.596: INFO: The status of Pod pod-update-867bbbc7-3809-48ab-bbb1-2199d7531662 is Pending, waiting for it to be Running (with Ready = true) May 25 10:12:54.597: INFO: The status of Pod pod-update-867bbbc7-3809-48ab-bbb1-2199d7531662 is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod May 25 10:12:55.284: INFO: Successfully updated pod "pod-update-867bbbc7-3809-48ab-bbb1-2199d7531662" STEP: verifying the updated pod is in kubernetes May 25 10:12:55.291: INFO: Pod update OK [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:55.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-686" 
for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":459,"failed":0} [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:55.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:12:55.333: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-7f3335c3-80f0-4079-8642-96d7e35e7134" in namespace "security-context-test-9773" to be "Succeeded or Failed" May 25 10:12:55.335: INFO: Pod "alpine-nnp-false-7f3335c3-80f0-4079-8642-96d7e35e7134": Phase="Pending", Reason="", readiness=false. Elapsed: 2.249928ms May 25 10:12:57.683: INFO: Pod "alpine-nnp-false-7f3335c3-80f0-4079-8642-96d7e35e7134": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.350165246s May 25 10:12:57.683: INFO: Pod "alpine-nnp-false-7f3335c3-80f0-4079-8642-96d7e35e7134" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:57.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9773" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":459,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:54.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars May 25 10:12:54.290: INFO: Waiting up to 5m0s for pod "downward-api-f4de9807-392b-46d7-aa35-0b9246211192" in namespace "downward-api-4713" to be "Succeeded or Failed" May 25 10:12:54.293: INFO: Pod "downward-api-f4de9807-392b-46d7-aa35-0b9246211192": Phase="Pending", Reason="", readiness=false. Elapsed: 2.610746ms May 25 10:12:56.298: INFO: Pod "downward-api-f4de9807-392b-46d7-aa35-0b9246211192": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007457133s May 25 10:12:58.301: INFO: Pod "downward-api-f4de9807-392b-46d7-aa35-0b9246211192": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010669971s STEP: Saw pod success May 25 10:12:58.301: INFO: Pod "downward-api-f4de9807-392b-46d7-aa35-0b9246211192" satisfied condition "Succeeded or Failed" May 25 10:12:58.303: INFO: Trying to get logs from node v1.21-worker2 pod downward-api-f4de9807-392b-46d7-aa35-0b9246211192 container dapi-container: STEP: delete the pod May 25 10:12:58.314: INFO: Waiting for pod downward-api-f4de9807-392b-46d7-aa35-0b9246211192 to disappear May 25 10:12:58.316: INFO: Pod downward-api-f4de9807-392b-46d7-aa35-0b9246211192 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:58.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4713" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":442,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:19.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 25 10:12:19.351: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI 
documentation May 25 10:12:35.656: INFO: >>> kubeConfig: /root/.kube/config May 25 10:12:39.649: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:12:59.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-669" for this suite. • [SLOW TEST:40.086 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":20,"skipped":404,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:59.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:12:59.505: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources 
[Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:13:00.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2900" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":21,"skipped":448,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:53.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller May 25 10:12:53.397: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-2393 create -f -' May 25 10:12:54.195: INFO: stderr: "" May 25 10:12:54.195: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 25 10:12:54.195: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-2393 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 25 10:12:54.339: INFO: stderr: "" May 25 10:12:54.339: INFO: stdout: "update-demo-nautilus-b47h5 update-demo-nautilus-hbq7x " May 25 10:12:54.339: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-2393 get pods update-demo-nautilus-b47h5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 25 10:12:55.301: INFO: stderr: "" May 25 10:12:55.301: INFO: stdout: "" May 25 10:12:55.301: INFO: update-demo-nautilus-b47h5 is created but not running May 25 10:13:00.303: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-2393 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 25 10:13:00.439: INFO: stderr: "" May 25 10:13:00.439: INFO: stdout: "update-demo-nautilus-b47h5 update-demo-nautilus-hbq7x " May 25 10:13:00.439: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-2393 get pods update-demo-nautilus-b47h5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 25 10:13:00.573: INFO: stderr: "" May 25 10:13:00.573: INFO: stdout: "true" May 25 10:13:00.574: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-2393 get pods update-demo-nautilus-b47h5 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 25 10:13:00.707: INFO: stderr: "" May 25 10:13:00.707: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 25 10:13:00.707: INFO: validating pod update-demo-nautilus-b47h5 May 25 10:13:00.718: INFO: got data: { "image": "nautilus.jpg" } May 25 10:13:00.718: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 25 10:13:00.718: INFO: update-demo-nautilus-b47h5 is verified up and running May 25 10:13:00.718: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-2393 get pods update-demo-nautilus-hbq7x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 25 10:13:00.854: INFO: stderr: "" May 25 10:13:00.854: INFO: stdout: "true" May 25 10:13:00.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-2393 get pods update-demo-nautilus-hbq7x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 25 10:13:00.986: INFO: stderr: "" May 25 10:13:00.986: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 25 10:13:00.986: INFO: validating pod update-demo-nautilus-hbq7x May 25 10:13:00.991: INFO: got data: { "image": "nautilus.jpg" } May 25 10:13:00.991: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 25 10:13:00.991: INFO: update-demo-nautilus-hbq7x is verified up and running STEP: using delete to clean up resources May 25 10:13:00.991: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-2393 delete --grace-period=0 --force -f -' May 25 10:13:01.107: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 25 10:13:01.107: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 25 10:13:01.107: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-2393 get rc,svc -l name=update-demo --no-headers' May 25 10:13:01.252: INFO: stderr: "No resources found in kubectl-2393 namespace.\n" May 25 10:13:01.252: INFO: stdout: "" May 25 10:13:01.252: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-2393 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 25 10:13:01.395: INFO: stderr: "" May 25 10:13:01.396: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:13:01.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2393" for this suite. 
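The Update Demo checks above poll pod state with a go-template: `{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}` prints `true` once the named container has a `running` entry in its state, and prints nothing while `containerStatuses` is absent or the container is still waiting (hence the empty stdout followed by a retry in the log). A Python equivalent of that predicate, applied to hypothetical trimmed pod objects shaped like `kubectl get pod -o json` output:

```python
def container_running(pod: dict, name: str = "update-demo") -> bool:
    """Mirror of: (and (eq .name "update-demo") (exists . "state" "running"))."""
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == name and "running" in status.get("state", {}):
            return True
    return False

# containerStatuses not published yet -> the template prints "" and the
# test retries, exactly as in the log.
pending_pod = {"status": {}}

running_pod = {
    "status": {
        "containerStatuses": [
            {"name": "update-demo", "state": {"running": {"startedAt": "..."}}}
        ]
    }
}

assert not container_running(pending_pod)   # log: stdout "" -> not running yet
assert container_running(running_pod)       # log: stdout "true"
```

The go-template's `exists` guard is what makes the empty-stdout case safe: without it, indexing `.state.running` on a pending pod would fail rather than print nothing.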
• [SLOW TEST:8.395 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":29,"skipped":555,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:58.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 STEP: creating the pod May 25 10:12:58.297: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-6961 create -f -' May 25 10:12:58.679: INFO: stderr: "" May 25 10:12:58.679: INFO: stdout: "pod/pause created\n" May 25 10:12:58.679: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 25 10:12:58.679: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6961" to be "running and ready" May 25 10:12:58.682: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.696754ms May 25 10:13:00.685: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006117788s May 25 10:13:02.689: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009943905s May 25 10:13:04.693: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.014136139s May 25 10:13:04.693: INFO: Pod "pause" satisfied condition "running and ready" May 25 10:13:04.693: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: adding the label testing-label with value testing-label-value to a pod May 25 10:13:04.694: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-6961 label pods pause testing-label=testing-label-value' May 25 10:13:04.843: INFO: stderr: "" May 25 10:13:04.843: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 25 10:13:04.844: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-6961 get pod pause -L testing-label' May 25 10:13:04.980: INFO: stderr: "" May 25 10:13:04.980: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s testing-label-value\n" STEP: removing the label testing-label of a pod May 25 10:13:04.980: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-6961 label pods pause testing-label-' May 25 10:13:05.126: INFO: stderr: "" May 25 10:13:05.126: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 25 10:13:05.127: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 
--kubeconfig=/root/.kube/config --namespace=kubectl-6961 get pod pause -L testing-label' May 25 10:13:05.257: INFO: stderr: "" May 25 10:13:05.257: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1314 STEP: using delete to clean up resources May 25 10:13:05.258: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-6961 delete --grace-period=0 --force -f -' May 25 10:13:05.386: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 25 10:13:05.387: INFO: stdout: "pod \"pause\" force deleted\n" May 25 10:13:05.387: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-6961 get rc,svc -l name=pause --no-headers' May 25 10:13:05.537: INFO: stderr: "No resources found in kubectl-6961 namespace.\n" May 25 10:13:05.537: INFO: stdout: "" May 25 10:13:05.537: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-6961 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 25 10:13:05.674: INFO: stderr: "" May 25 10:13:05.674: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:13:05.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6961" for this suite. 
• [SLOW TEST:7.589 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1306 should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":20,"skipped":463,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:19.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-1429, will wait for the garbage collector to delete the pods May 25 10:12:25.310: INFO: Deleting Job.batch foo took: 5.129521ms May 25 10:12:25.411: INFO: Terminating Job.batch foo pods took: 100.426195ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:13:07.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1429" for this suite. 
• [SLOW TEST:48.718 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":24,"skipped":291,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:13:05.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-0304b10e-7785-4664-b1d3-3b43335a04cf STEP: Creating a pod to test consume configMaps May 25 10:13:05.741: INFO: Waiting up to 5m0s for pod "pod-configmaps-06910ac1-3166-4a5f-a422-eb0b3f35b969" in namespace "configmap-8372" to be "Succeeded or Failed" May 25 10:13:05.744: INFO: Pod "pod-configmaps-06910ac1-3166-4a5f-a422-eb0b3f35b969": Phase="Pending", Reason="", readiness=false. Elapsed: 2.887682ms May 25 10:13:07.748: INFO: Pod "pod-configmaps-06910ac1-3166-4a5f-a422-eb0b3f35b969": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007024445s May 25 10:13:09.779: INFO: Pod "pod-configmaps-06910ac1-3166-4a5f-a422-eb0b3f35b969": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.037130284s May 25 10:13:11.879: INFO: Pod "pod-configmaps-06910ac1-3166-4a5f-a422-eb0b3f35b969": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.137793409s STEP: Saw pod success May 25 10:13:11.879: INFO: Pod "pod-configmaps-06910ac1-3166-4a5f-a422-eb0b3f35b969" satisfied condition "Succeeded or Failed" May 25 10:13:11.883: INFO: Trying to get logs from node v1.21-worker pod pod-configmaps-06910ac1-3166-4a5f-a422-eb0b3f35b969 container agnhost-container: STEP: delete the pod May 25 10:13:11.991: INFO: Waiting for pod pod-configmaps-06910ac1-3166-4a5f-a422-eb0b3f35b969 to disappear May 25 10:13:11.995: INFO: Pod pod-configmaps-06910ac1-3166-4a5f-a422-eb0b3f35b969 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:13:11.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8372" for this suite. • [SLOW TEST:6.299 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":475,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:13:00.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for 
a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-565 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-565 I0525 10:13:00.593365 32 runners.go:190] Created replication controller with name: externalname-service, namespace: services-565, replica count: 2 I0525 10:13:03.644553 32 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 10:13:06.645572 32 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 25 10:13:06.645: INFO: Creating new exec pod May 25 10:13:09.657: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-565 exec execpodj4wsd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' May 25 10:13:12.111: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" May 25 10:13:12.111: INFO: stdout: "externalname-service-ktx6v" May 25 10:13:12.111: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-565 exec execpodj4wsd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.96.19.68 80' May 25 10:13:12.370: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.96.19.68 80\nConnection to 
10.96.19.68 80 port [tcp/http] succeeded!\n" May 25 10:13:12.370: INFO: stdout: "externalname-service-z7hcb" May 25 10:13:12.370: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:13:12.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-565" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:11.853 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":22,"skipped":452,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:13:12.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 
10:13:13.278: INFO: Checking APIGroup: apiregistration.k8s.io May 25 10:13:13.280: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 May 25 10:13:13.280: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] May 25 10:13:13.280: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 May 25 10:13:13.280: INFO: Checking APIGroup: apps May 25 10:13:13.281: INFO: PreferredVersion.GroupVersion: apps/v1 May 25 10:13:13.281: INFO: Versions found [{apps/v1 v1}] May 25 10:13:13.281: INFO: apps/v1 matches apps/v1 May 25 10:13:13.281: INFO: Checking APIGroup: events.k8s.io May 25 10:13:13.283: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 May 25 10:13:13.283: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] May 25 10:13:13.283: INFO: events.k8s.io/v1 matches events.k8s.io/v1 May 25 10:13:13.283: INFO: Checking APIGroup: authentication.k8s.io May 25 10:13:13.284: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 May 25 10:13:13.284: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] May 25 10:13:13.284: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 May 25 10:13:13.284: INFO: Checking APIGroup: authorization.k8s.io May 25 10:13:13.286: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 May 25 10:13:13.286: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] May 25 10:13:13.286: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 May 25 10:13:13.286: INFO: Checking APIGroup: autoscaling May 25 10:13:13.287: INFO: PreferredVersion.GroupVersion: autoscaling/v1 May 25 10:13:13.287: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] May 25 10:13:13.287: INFO: autoscaling/v1 matches autoscaling/v1 May 25 10:13:13.287: INFO: Checking APIGroup: batch May 25 10:13:13.289: INFO: PreferredVersion.GroupVersion: 
batch/v1 May 25 10:13:13.289: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] May 25 10:13:13.289: INFO: batch/v1 matches batch/v1 May 25 10:13:13.289: INFO: Checking APIGroup: certificates.k8s.io May 25 10:13:13.290: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 May 25 10:13:13.290: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] May 25 10:13:13.290: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 May 25 10:13:13.290: INFO: Checking APIGroup: networking.k8s.io May 25 10:13:13.292: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 May 25 10:13:13.292: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] May 25 10:13:13.292: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 May 25 10:13:13.292: INFO: Checking APIGroup: extensions May 25 10:13:13.293: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 May 25 10:13:13.293: INFO: Versions found [{extensions/v1beta1 v1beta1}] May 25 10:13:13.293: INFO: extensions/v1beta1 matches extensions/v1beta1 May 25 10:13:13.293: INFO: Checking APIGroup: policy May 25 10:13:13.294: INFO: PreferredVersion.GroupVersion: policy/v1 May 25 10:13:13.294: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}] May 25 10:13:13.294: INFO: policy/v1 matches policy/v1 May 25 10:13:13.294: INFO: Checking APIGroup: rbac.authorization.k8s.io May 25 10:13:13.295: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 May 25 10:13:13.295: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] May 25 10:13:13.295: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 May 25 10:13:13.295: INFO: Checking APIGroup: storage.k8s.io May 25 10:13:13.297: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 May 25 10:13:13.297: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] May 25 10:13:13.297: INFO: 
storage.k8s.io/v1 matches storage.k8s.io/v1 May 25 10:13:13.297: INFO: Checking APIGroup: admissionregistration.k8s.io May 25 10:13:13.298: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 May 25 10:13:13.298: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] May 25 10:13:13.298: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 May 25 10:13:13.298: INFO: Checking APIGroup: apiextensions.k8s.io May 25 10:13:13.299: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 May 25 10:13:13.299: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] May 25 10:13:13.299: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 May 25 10:13:13.299: INFO: Checking APIGroup: scheduling.k8s.io May 25 10:13:13.301: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 May 25 10:13:13.301: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] May 25 10:13:13.301: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 May 25 10:13:13.301: INFO: Checking APIGroup: coordination.k8s.io May 25 10:13:13.302: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 May 25 10:13:13.302: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] May 25 10:13:13.302: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 May 25 10:13:13.302: INFO: Checking APIGroup: node.k8s.io May 25 10:13:13.303: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 May 25 10:13:13.303: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] May 25 10:13:13.303: INFO: node.k8s.io/v1 matches node.k8s.io/v1 May 25 10:13:13.303: INFO: Checking APIGroup: discovery.k8s.io May 25 10:13:13.304: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 May 25 10:13:13.305: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}] May 25 10:13:13.305: 
INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 May 25 10:13:13.305: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io May 25 10:13:13.306: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 May 25 10:13:13.306: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] May 25 10:13:13.306: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1 May 25 10:13:13.306: INFO: Checking APIGroup: k8s.cni.cncf.io May 25 10:13:13.307: INFO: PreferredVersion.GroupVersion: k8s.cni.cncf.io/v1 May 25 10:13:13.307: INFO: Versions found [{k8s.cni.cncf.io/v1 v1}] May 25 10:13:13.307: INFO: k8s.cni.cncf.io/v1 matches k8s.cni.cncf.io/v1 May 25 10:13:13.307: INFO: Checking APIGroup: projectcontour.io May 25 10:13:13.309: INFO: PreferredVersion.GroupVersion: projectcontour.io/v1 May 25 10:13:13.309: INFO: Versions found [{projectcontour.io/v1 v1} {projectcontour.io/v1alpha1 v1alpha1}] May 25 10:13:13.309: INFO: projectcontour.io/v1 matches projectcontour.io/v1 [AfterEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:13:13.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-7287" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":23,"skipped":462,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:13:07.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-f7c4560a-8a1e-4d1f-995a-49cf21ea0e5b STEP: Creating secret with name s-test-opt-upd-e049c201-a7e8-45b6-abdc-8eef0592869c STEP: Creating the pod May 25 10:13:08.005: INFO: The status of Pod pod-secrets-a805bd2c-4028-404b-8286-5ab147845eff is Pending, waiting for it to be Running (with Ready = true) May 25 10:13:10.778: INFO: The status of Pod pod-secrets-a805bd2c-4028-404b-8286-5ab147845eff is Pending, waiting for it to be Running (with Ready = true) May 25 10:13:12.008: INFO: The status of Pod pod-secrets-a805bd2c-4028-404b-8286-5ab147845eff is Pending, waiting for it to be Running (with Ready = true) May 25 10:13:14.010: INFO: The status of Pod pod-secrets-a805bd2c-4028-404b-8286-5ab147845eff is Running (Ready = true) STEP: Deleting secret s-test-opt-del-f7c4560a-8a1e-4d1f-995a-49cf21ea0e5b STEP: Updating secret s-test-opt-upd-e049c201-a7e8-45b6-abdc-8eef0592869c STEP: Creating secret with name s-test-opt-create-754a6f87-b833-4d76-897b-ad10e2bfe737 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:13:16.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3971" for this suite. • [SLOW TEST:8.140 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":299,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:13:12.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 25 10:13:12.053: INFO: Pod name pod-release: Found 0 pods out of 1 May 25 10:13:17.057: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:13:18.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5733" for this suite. • [SLOW TEST:6.063 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":22,"skipped":485,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:13:16.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 25 10:13:16.161: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d7af570c-2352-497b-b627-1b606dbe0b9e" in namespace "projected-8838" to be "Succeeded or Failed" May 25 10:13:16.164: INFO: Pod "downwardapi-volume-d7af570c-2352-497b-b627-1b606dbe0b9e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.954299ms May 25 10:13:18.169: INFO: Pod "downwardapi-volume-d7af570c-2352-497b-b627-1b606dbe0b9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007746741s STEP: Saw pod success May 25 10:13:18.169: INFO: Pod "downwardapi-volume-d7af570c-2352-497b-b627-1b606dbe0b9e" satisfied condition "Succeeded or Failed" May 25 10:13:18.173: INFO: Trying to get logs from node v1.21-worker pod downwardapi-volume-d7af570c-2352-497b-b627-1b606dbe0b9e container client-container: STEP: delete the pod May 25 10:13:18.186: INFO: Waiting for pod downwardapi-volume-d7af570c-2352-497b-b627-1b606dbe0b9e to disappear May 25 10:13:18.189: INFO: Pod downwardapi-volume-d7af570c-2352-497b-b627-1b606dbe0b9e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:13:18.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8838" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":313,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:13:18.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 25 10:13:20.155: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:13:20.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2124" for this suite. 
•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":491,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:13:20.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
May 25 10:13:20.223: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b6f8ec9-f7e1-4c2f-8ad8-2d0a1329179d" in namespace "projected-1287" to be "Succeeded or Failed"
May 25 10:13:20.227: INFO: Pod "downwardapi-volume-7b6f8ec9-f7e1-4c2f-8ad8-2d0a1329179d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.0951ms
May 25 10:13:22.231: INFO: Pod "downwardapi-volume-7b6f8ec9-f7e1-4c2f-8ad8-2d0a1329179d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00792868s
STEP: Saw pod success
May 25 10:13:22.231: INFO: Pod "downwardapi-volume-7b6f8ec9-f7e1-4c2f-8ad8-2d0a1329179d" satisfied condition "Succeeded or Failed"
May 25 10:13:22.235: INFO: Trying to get logs from node v1.21-worker2 pod downwardapi-volume-7b6f8ec9-f7e1-4c2f-8ad8-2d0a1329179d container client-container:
STEP: delete the pod
May 25 10:13:22.249: INFO: Waiting for pod downwardapi-volume-7b6f8ec9-f7e1-4c2f-8ad8-2d0a1329179d to disappear
May 25 10:13:22.253: INFO: Pod downwardapi-volume-7b6f8ec9-f7e1-4c2f-8ad8-2d0a1329179d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:13:22.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1287" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":492,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:13:01.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-projected-fz6j
STEP: Creating a pod to test atomic-volume-subpath
May 25 10:13:01.465: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-fz6j" in namespace "subpath-2742" to be "Succeeded or Failed"
May 25 10:13:01.468: INFO: Pod "pod-subpath-test-projected-fz6j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.761364ms
May 25 10:13:03.471: INFO: Pod "pod-subpath-test-projected-fz6j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006305107s
May 25 10:13:05.476: INFO: Pod "pod-subpath-test-projected-fz6j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011028691s
May 25 10:13:07.480: INFO: Pod "pod-subpath-test-projected-fz6j": Phase="Running", Reason="", readiness=true. Elapsed: 6.015098658s
May 25 10:13:09.579: INFO: Pod "pod-subpath-test-projected-fz6j": Phase="Running", Reason="", readiness=true. Elapsed: 8.113530975s
May 25 10:13:11.584: INFO: Pod "pod-subpath-test-projected-fz6j": Phase="Running", Reason="", readiness=true. Elapsed: 10.118604853s
May 25 10:13:13.588: INFO: Pod "pod-subpath-test-projected-fz6j": Phase="Running", Reason="", readiness=true. Elapsed: 12.123153627s
May 25 10:13:15.593: INFO: Pod "pod-subpath-test-projected-fz6j": Phase="Running", Reason="", readiness=true. Elapsed: 14.127806684s
May 25 10:13:17.598: INFO: Pod "pod-subpath-test-projected-fz6j": Phase="Running", Reason="", readiness=true. Elapsed: 16.132738321s
May 25 10:13:19.602: INFO: Pod "pod-subpath-test-projected-fz6j": Phase="Running", Reason="", readiness=true. Elapsed: 18.137079663s
May 25 10:13:21.607: INFO: Pod "pod-subpath-test-projected-fz6j": Phase="Running", Reason="", readiness=true. Elapsed: 20.142267577s
May 25 10:13:23.612: INFO: Pod "pod-subpath-test-projected-fz6j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.147208358s
STEP: Saw pod success
May 25 10:13:23.612: INFO: Pod "pod-subpath-test-projected-fz6j" satisfied condition "Succeeded or Failed"
May 25 10:13:23.616: INFO: Trying to get logs from node v1.21-worker pod pod-subpath-test-projected-fz6j container test-container-subpath-projected-fz6j:
STEP: delete the pod
May 25 10:13:23.632: INFO: Waiting for pod pod-subpath-test-projected-fz6j to disappear
May 25 10:13:23.635: INFO: Pod pod-subpath-test-projected-fz6j no longer exists
STEP: Deleting pod pod-subpath-test-projected-fz6j
May 25 10:13:23.635: INFO: Deleting pod "pod-subpath-test-projected-fz6j" in namespace "subpath-2742"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:13:23.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2742" for this suite.
• [SLOW TEST:22.230 seconds]
[sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":30,"skipped":560,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:13:18.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating the pod
May 25 10:13:18.276: INFO: The status of Pod annotationupdate3e9b5533-172b-47c5-bf41-fe6890b9f1bd is Pending, waiting for it to be Running (with Ready = true)
May 25 10:13:20.280: INFO: The status of Pod annotationupdate3e9b5533-172b-47c5-bf41-fe6890b9f1bd is Running (Ready = true)
May 25 10:13:20.803: INFO: Successfully updated pod "annotationupdate3e9b5533-172b-47c5-bf41-fe6890b9f1bd"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:13:24.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1682" for this suite.
• [SLOW TEST:6.604 seconds]
[sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":331,"failed":0}
S
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:09:22.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod test-webserver-7c5ee41b-156e-491d-8331-1428fa8ca82c in namespace container-probe-7850
May 25 10:09:24.336: INFO: Started pod test-webserver-7c5ee41b-156e-491d-8331-1428fa8ca82c in namespace container-probe-7850
STEP: checking the pod's current state and verifying that restartCount is present
May 25 10:09:24.339: INFO: Initial restart count of pod test-webserver-7c5ee41b-156e-491d-8331-1428fa8ca82c is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:13:25.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7850" for this suite.
• [SLOW TEST:242.934 seconds]
[sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:13:22.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 25 10:13:22.798: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 25 10:13:25.818: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:13:25.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2322" for this suite.
STEP: Destroying namespace "webhook-2322-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":25,"skipped":511,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] server version
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:13:25.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename server-version
STEP: Waiting for a default service account to be provisioned in namespace
[It] should find the server version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Request ServerVersion
STEP: Confirm major version
May 25 10:13:26.014: INFO: Major version: 1
STEP: Confirm minor version
May 25 10:13:26.014: INFO: cleanMinorVersion: 21
May 25 10:13:26.014: INFO: Minor version: 21
[AfterEach] [sig-api-machinery] server version
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:13:26.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-8769" for this suite.
•
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:13:24.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-28516738-91be-4f74-914b-cd1f123ca7cd
STEP: Creating a pod to test consume secrets
May 25 10:13:24.886: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-df38ef97-3492-4fc7-a7de-3c83acba83b2" in namespace "projected-2662" to be "Succeeded or Failed"
May 25 10:13:24.890: INFO: Pod "pod-projected-secrets-df38ef97-3492-4fc7-a7de-3c83acba83b2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.253515ms
May 25 10:13:26.893: INFO: Pod "pod-projected-secrets-df38ef97-3492-4fc7-a7de-3c83acba83b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006651549s
May 25 10:13:28.896: INFO: Pod "pod-projected-secrets-df38ef97-3492-4fc7-a7de-3c83acba83b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009960876s
STEP: Saw pod success
May 25 10:13:28.896: INFO: Pod "pod-projected-secrets-df38ef97-3492-4fc7-a7de-3c83acba83b2" satisfied condition "Succeeded or Failed"
May 25 10:13:28.899: INFO: Trying to get logs from node v1.21-worker pod pod-projected-secrets-df38ef97-3492-4fc7-a7de-3c83acba83b2 container projected-secret-volume-test:
STEP: delete the pod
May 25 10:13:28.913: INFO: Waiting for pod pod-projected-secrets-df38ef97-3492-4fc7-a7de-3c83acba83b2 to disappear
May 25 10:13:28.916: INFO: Pod pod-projected-secrets-df38ef97-3492-4fc7-a7de-3c83acba83b2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:13:28.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2662" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":332,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:13:28.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
May 25 10:13:28.984: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:13:33.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1402" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":29,"skipped":344,"failed":0}
SSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":26,"skipped":555,"failed":0}
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:13:26.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86
[It] should run the lifecycle of a Deployment [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a Deployment
STEP: waiting for Deployment to be created
STEP: waiting for all Replicas to be Ready
May 25 10:13:26.066: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 0 and labels map[test-deployment-static:true]
May 25 10:13:26.066: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 0 and labels map[test-deployment-static:true]
May 25 10:13:26.072: INFO: observed Deployment
test-deployment in namespace deployment-1278 with ReadyReplicas 0 and labels map[test-deployment-static:true]
May 25 10:13:26.072: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 0 and labels map[test-deployment-static:true]
May 25 10:13:26.089: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 0 and labels map[test-deployment-static:true]
May 25 10:13:26.089: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 0 and labels map[test-deployment-static:true]
May 25 10:13:26.096: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 0 and labels map[test-deployment-static:true]
May 25 10:13:26.096: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 0 and labels map[test-deployment-static:true]
May 25 10:13:27.596: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 1 and labels map[test-deployment-static:true]
May 25 10:13:27.596: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 1 and labels map[test-deployment-static:true]
May 25 10:13:28.832: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 2 and labels map[test-deployment-static:true]
STEP: patching the Deployment
May 25 10:13:28.840: INFO: observed event type ADDED
STEP: waiting for Replicas to scale
May 25 10:13:28.843: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 0
May 25 10:13:28.843: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 0
May 25 10:13:28.843: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 0
May 25 10:13:28.843: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 0
May 25 10:13:28.843: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 0
May 25 10:13:28.843: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 0
May 25 10:13:28.843: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 0
May 25 10:13:28.843: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 0
May 25 10:13:28.843: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 1
May 25 10:13:28.843: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 1
May 25 10:13:28.843: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 2
May 25 10:13:28.843: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 2
May 25 10:13:28.843: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 2
May 25 10:13:28.843: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 2
May 25 10:13:28.848: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 2
May 25 10:13:28.848: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 2
May 25 10:13:28.857: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 2
May 25 10:13:28.857: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 2
May 25 10:13:28.871: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 1
May 25 10:13:28.871: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 1
May 25 10:13:28.882: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 1
May 25 10:13:28.882: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 1
May 25 10:13:31.236: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 2
May 25 10:13:31.236: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 2
May 25 10:13:31.251: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 1
STEP: listing Deployments
May 25 10:13:31.255: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true]
STEP: updating the Deployment
May 25 10:13:31.268: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 1
STEP: fetching the DeploymentStatus
May 25 10:13:31.277: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
May 25 10:13:31.277: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
May 25 10:13:31.287: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
May 25 10:13:31.301: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
May 25 10:13:31.306: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
May 25 10:13:32.617: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
May 25 10:13:32.630: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
May 25 10:13:32.635: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
May 25 10:13:32.643: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
May 25 10:13:34.438: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true]
STEP: patching the DeploymentStatus
STEP: fetching the DeploymentStatus
May 25 10:13:34.476: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 1
May 25 10:13:34.476: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 1
May 25 10:13:34.477: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 1
May 25 10:13:34.477: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 1
May 25 10:13:34.477: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 1
May 25 10:13:34.477: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 2
May 25 10:13:34.477: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 2
May 25 10:13:34.477: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 2
May 25 10:13:34.477: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 2
May 25 10:13:34.477: INFO: observed Deployment test-deployment in namespace deployment-1278 with ReadyReplicas 3
STEP: deleting the Deployment
May 25 10:13:34.484: INFO: observed event type MODIFIED
May 25 10:13:34.484: INFO: observed event type MODIFIED
May 25 10:13:34.484: INFO: observed event type MODIFIED
May 25 10:13:34.484: INFO: observed event type MODIFIED
May 25 10:13:34.484: INFO: observed event type MODIFIED
May 25 10:13:34.484: INFO: observed event
type MODIFIED May 25 10:13:34.484: INFO: observed event type MODIFIED May 25 10:13:34.484: INFO: observed event type MODIFIED May 25 10:13:34.485: INFO: observed event type MODIFIED May 25 10:13:34.485: INFO: observed event type MODIFIED May 25 10:13:34.485: INFO: observed event type MODIFIED [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 May 25 10:13:34.487: INFO: Log out all the ReplicaSets if there is no deployment created May 25 10:13:34.491: INFO: ReplicaSet "test-deployment-748588b7cd": &ReplicaSet{ObjectMeta:{test-deployment-748588b7cd deployment-1278 c2604537-6e43-483d-bdc9-77dbd32c5001 500754 4 2021-05-25 10:13:28 +0000 UTC map[pod-template-hash:748588b7cd test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment 50380734-cef7-45ce-886c-7c00862af296 0xc0055ca0d7 0xc0055ca0d8}] [] [{kube-controller-manager Update apps/v1 2021-05-25 10:13:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"50380734-cef7-45ce-886c-7c00862af296\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 748588b7cd,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:748588b7cd test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/pause:3.4.1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0055ca140 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 25 10:13:34.494: INFO: ReplicaSet 
"test-deployment-7b4c744884": &ReplicaSet{ObjectMeta:{test-deployment-7b4c744884 deployment-1278 5a023a1e-8c84-4f9a-a8e6-3d25dd228b76 500626 3 2021-05-25 10:13:26 +0000 UTC map[pod-template-hash:7b4c744884 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment 50380734-cef7-45ce-886c-7c00862af296 0xc0055ca1a7 0xc0055ca1a8}] [] [{kube-controller-manager Update apps/v1 2021-05-25 10:13:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"50380734-cef7-45ce-886c-7c00862af296\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7b4c744884,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7b4c744884 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false 
false false}] [] Always 0xc0055ca210 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 25 10:13:34.497: INFO: ReplicaSet "test-deployment-85d87c6f4b": &ReplicaSet{ObjectMeta:{test-deployment-85d87c6f4b deployment-1278 1cf65b28-c398-411c-8d5c-ced798eaeb03 500744 2 2021-05-25 10:13:31 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment 50380734-cef7-45ce-886c-7c00862af296 0xc0055ca277 0xc0055ca278}] [] [{kube-controller-manager Update apps/v1 2021-05-25 10:13:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"50380734-cef7-45ce-886c-7c00862af296\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 85d87c6f4b,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0055ca2e0 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} May 25 10:13:34.501: INFO: pod: "test-deployment-85d87c6f4b-49pmb": &Pod{ObjectMeta:{test-deployment-85d87c6f4b-49pmb test-deployment-85d87c6f4b- deployment-1278 1d13013b-3bc4-418d-9977-7e2b6029f1c0 500674 0 2021-05-25 10:13:31 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.247" ], "mac": "9a:c9:e5:0e:10:42", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.247" ], "mac": "9a:c9:e5:0e:10:42", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet test-deployment-85d87c6f4b 1cf65b28-c398-411c-8d5c-ced798eaeb03 0xc0055ca927 0xc0055ca928}] [] [{kube-controller-manager Update v1 2021-05-25 10:13:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1cf65b28-c398-411c-8d5c-ced798eaeb03\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-25 10:13:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-25 10:13:32 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.247\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hcsdx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:Resourc
eList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hcsdx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:13:31 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:13:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:13:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:13:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.2.247,StartTime:2021-05-25 10:13:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-25 10:13:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://d7d857098ae097a29eac4978cf312d737e809119b55bbe9b1293982ace5f1f33,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.247,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:13:34.501: INFO: pod: "test-deployment-85d87c6f4b-jd74c": &Pod{ObjectMeta:{test-deployment-85d87c6f4b-jd74c test-deployment-85d87c6f4b- deployment-1278 b6b7884d-e979-444d-8542-a1c870bbeb87 500743 0 2021-05-25 10:13:32 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.129" ], "mac": "b6:44:61:9b:21:a3", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.129" ], "mac": "b6:44:61:9b:21:a3", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet test-deployment-85d87c6f4b 
1cf65b28-c398-411c-8d5c-ced798eaeb03 0xc0055cab27 0xc0055cab28}] [] [{kube-controller-manager Update v1 2021-05-25 10:13:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1cf65b28-c398-411c-8d5c-ced798eaeb03\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-25 10:13:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-25 10:13:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.129\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kxrj8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:Resourc
eList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kxrj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:13:32 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:13:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:13:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:13:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.1.129,StartTime:2021-05-25 10:13:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-25 10:13:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://3aab414a55530069d0f9a3df2c15943c3d60f89dc5c567b58f59747145dd9e7e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.129,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:13:34.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1278" for this suite. 
• [SLOW TEST:8.484 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":27,"skipped":555,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:13:25.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:13:36.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5356" for this suite. 
• [SLOW TEST:11.073 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":3,"skipped":12,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:13:33.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 25 10:13:36.305: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:13:36.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2614" for this suite. 
•S ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":349,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:13:34.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should create a PodDisruptionBudget [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pdb STEP: Waiting for the pdb to be processed STEP: updating the pdb STEP: Waiting for the pdb to be processed STEP: patching the pdb STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be deleted [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:13:39.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-9753" for this suite. 
• [SLOW TEST:5.167 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should create a PodDisruptionBudget [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:09:33.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-8f03432b-9204-4445-a7c0-07b9c9a82594 in namespace container-probe-2627 May 25 10:09:39.780: INFO: Started pod busybox-8f03432b-9204-4445-a7c0-07b9c9a82594 in namespace container-probe-2627 STEP: checking the pod's current state and verifying that restartCount is present May 25 10:09:39.784: INFO: Initial restart count of pod busybox-8f03432b-9204-4445-a7c0-07b9c9a82594 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:13:41.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2627" for this suite. 
• [SLOW TEST:247.572 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":59,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:13:36.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating server pod server in namespace prestop-3918 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-3918 STEP: Deleting pre-stop pod May 25 10:13:47.593: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:13:47.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-3918" for this suite. • [SLOW TEST:11.269 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":31,"skipped":362,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:13:47.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication May 25 10:13:48.330: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 10:13:48.344: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created May 25 10:13:50.577: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534428, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534428, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534428, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534428, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 10:13:53.683: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:13:53.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4496" for this suite. STEP: Destroying namespace "webhook-4496-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.142 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":32,"skipped":367,"failed":0} SSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":28,"skipped":557,"failed":0} [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:13:39.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 10:13:40.713: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 
10:13:43.735: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:13:53.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5625" for this suite. STEP: Destroying namespace "webhook-5625-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.206 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":29,"skipped":557,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:13:53.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should complete a service status lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Service STEP: watching for the Service to be added May 25 10:13:53.955: INFO: Found Service test-service-rcl8j in namespace services-2128 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] May 25 10:13:53.955: INFO: Service test-service-rcl8j created STEP: Getting /status May 25 10:13:53.959: INFO: Service test-service-rcl8j has LoadBalancer: {[]} STEP: patching the ServiceStatus STEP: watching for the Service to 
be patched May 25 10:13:53.966: INFO: observed Service test-service-rcl8j in namespace services-2128 with annotations: map[] & LoadBalancer: {[]} May 25 10:13:53.966: INFO: Found Service test-service-rcl8j in namespace services-2128 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} May 25 10:13:53.966: INFO: Service test-service-rcl8j has service status patched STEP: updating the ServiceStatus May 25 10:13:54.080: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the Service to be updated May 25 10:13:54.082: INFO: Observed Service test-service-rcl8j in namespace services-2128 with annotations: map[] & Conditions: {[]} May 25 10:13:54.082: INFO: Observed event: &Service{ObjectMeta:{test-service-rcl8j services-2128 a87c661c-7b3a-41e7-aefb-7d938269c997 501229 0 2021-05-25 10:13:53 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2021-05-25 10:13:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}},"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.96.134.167,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,TopologyKeys:[],IPFamilyPolicy:*SingleStack,ClusterIPs:[10.96.134.167],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} May 25 10:13:54.082: INFO: Observed event: &Service{ObjectMeta:{test-service-rcl8j services-2128 a87c661c-7b3a-41e7-aefb-7d938269c997 501230 0 2021-05-25 10:13:53 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2021-05-25 10:13:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}},"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.96.134.167,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,TopologyKeys:[],IPFamilyPolicy:*SingleStack,ClusterIPs:[10.96.134.167],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} May 25 10:13:54.083: INFO: Found Service test-service-rcl8j in namespace 
services-2128 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] May 25 10:13:54.083: INFO: Service test-service-rcl8j has service status updated STEP: patching the service STEP: watching for the Service to be patched May 25 10:13:54.091: INFO: observed Service test-service-rcl8j in namespace services-2128 with labels: map[test-service-static:true] May 25 10:13:54.091: INFO: observed Service test-service-rcl8j in namespace services-2128 with labels: map[test-service-static:true] May 25 10:13:54.091: INFO: observed Service test-service-rcl8j in namespace services-2128 with labels: map[test-service-static:true] May 25 10:13:54.092: INFO: observed Service test-service-rcl8j in namespace services-2128 with labels: map[test-service-static:true] May 25 10:13:54.092: INFO: Found Service test-service-rcl8j in namespace services-2128 with labels: map[test-service:patched test-service-static:true] May 25 10:13:54.092: INFO: Service test-service-rcl8j patched STEP: deleting the service STEP: watching for the Service to be deleted May 25 10:13:54.102: INFO: Observed event: ADDED May 25 10:13:54.102: INFO: Observed event: MODIFIED May 25 10:13:54.102: INFO: Observed event: MODIFIED May 25 10:13:54.102: INFO: Observed event: MODIFIED May 25 10:13:54.102: INFO: Observed event: MODIFIED May 25 10:13:54.102: INFO: Found Service test-service-rcl8j in namespace services-2128 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] May 25 10:13:54.102: INFO: Service test-service-rcl8j deleted [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:13:54.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2128" for this suite. 
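The status-lifecycle test above patches the Service with an annotation and a `status.loadBalancer.ingress` entry (the `203.0.113.1` IP seen in the watch events). A minimal JSON-merge-patch applier, simplified to dict merging in the spirit of RFC 7386, sketches what that PATCH does to the object; the Service skeleton below is illustrative, not the full API object:

```python
def merge_patch(target, patch):
    """Recursively apply a JSON-merge-patch-style dict onto target, in place."""
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(target.get(key), dict):
            merge_patch(target[key], value)      # merge nested objects
        elif value is None:
            target.pop(key, None)                # null deletes the key
        else:
            target[key] = value                  # scalars/lists replace wholesale
    return target

svc = {"metadata": {"annotations": {}}, "status": {"loadBalancer": {}}}
merge_patch(svc, {
    "metadata": {"annotations": {"patchedstatus": "true"}},
    "status": {"loadBalancer": {"ingress": [{"ip": "203.0.113.1"}]}},
})
```

Note that lists (like `ingress`) are replaced atomically rather than merged, which is why the later watch event shows the ingress list emptied again once the status is rewritten.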
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":30,"skipped":566,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:13:36.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller May 25 10:13:36.361: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8202 create -f -' May 25 10:13:36.742: INFO: stderr: "" May 25 10:13:36.742: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 25 10:13:36.742: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8202 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 25 10:13:36.875: INFO: stderr: "" May 25 10:13:36.875: INFO: stdout: "update-demo-nautilus-f4sqs update-demo-nautilus-j7lf8 " May 25 10:13:36.875: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8202 get pods update-demo-nautilus-f4sqs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 25 10:13:37.005: INFO: stderr: "" May 25 10:13:37.005: INFO: stdout: "" May 25 10:13:37.005: INFO: update-demo-nautilus-f4sqs is created but not running May 25 10:13:42.008: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8202 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 25 10:13:42.162: INFO: stderr: "" May 25 10:13:42.163: INFO: stdout: "update-demo-nautilus-f4sqs update-demo-nautilus-j7lf8 " May 25 10:13:42.163: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8202 get pods update-demo-nautilus-f4sqs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 25 10:13:42.307: INFO: stderr: "" May 25 10:13:42.307: INFO: stdout: "true" May 25 10:13:42.307: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8202 get pods update-demo-nautilus-f4sqs -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 25 10:13:42.437: INFO: stderr: "" May 25 10:13:42.437: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 25 10:13:42.437: INFO: validating pod update-demo-nautilus-f4sqs May 25 10:13:42.442: INFO: got data: { "image": "nautilus.jpg" } May 25 10:13:42.442: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 25 10:13:42.442: INFO: update-demo-nautilus-f4sqs is verified up and running May 25 10:13:42.442: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8202 get pods update-demo-nautilus-j7lf8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 25 10:13:42.581: INFO: stderr: "" May 25 10:13:42.581: INFO: stdout: "true" May 25 10:13:42.581: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8202 get pods update-demo-nautilus-j7lf8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 25 10:13:42.714: INFO: stderr: "" May 25 10:13:42.714: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 25 10:13:42.714: INFO: validating pod update-demo-nautilus-j7lf8 May 25 10:13:42.718: INFO: got data: { "image": "nautilus.jpg" } May 25 10:13:42.718: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 25 10:13:42.718: INFO: update-demo-nautilus-j7lf8 is verified up and running STEP: scaling down the replication controller May 25 10:13:42.724: INFO: scanned /root for discovery docs: May 25 10:13:42.724: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8202 scale rc update-demo-nautilus --replicas=1 --timeout=5m' May 25 10:13:42.893: INFO: stderr: "" May 25 10:13:42.893: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 25 10:13:42.893: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8202 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 25 10:13:43.041: INFO: stderr: "" May 25 10:13:43.041: INFO: stdout: "update-demo-nautilus-f4sqs update-demo-nautilus-j7lf8 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 25 10:13:48.041: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8202 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 25 10:13:48.178: INFO: stderr: "" May 25 10:13:48.178: INFO: stdout: "update-demo-nautilus-j7lf8 " May 25 10:13:48.179: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8202 get pods update-demo-nautilus-j7lf8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' May 25 10:13:48.300: INFO: stderr: "" May 25 10:13:48.300: INFO: stdout: "true" May 25 10:13:48.300: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8202 get pods update-demo-nautilus-j7lf8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 25 10:13:48.430: INFO: stderr: "" May 25 10:13:48.430: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 25 10:13:48.430: INFO: validating pod update-demo-nautilus-j7lf8 May 25 10:13:48.433: INFO: got data: { "image": "nautilus.jpg" } May 25 10:13:48.433: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 25 10:13:48.433: INFO: update-demo-nautilus-j7lf8 is verified up and running STEP: scaling up the replication controller May 25 10:13:48.437: INFO: scanned /root for discovery docs: May 25 10:13:48.437: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8202 scale rc update-demo-nautilus --replicas=2 --timeout=5m' May 25 10:13:48.589: INFO: stderr: "" May 25 10:13:48.589: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 25 10:13:48.589: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8202 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 25 10:13:48.733: INFO: stderr: "" May 25 10:13:48.733: INFO: stdout: "update-demo-nautilus-9x89j update-demo-nautilus-j7lf8 " May 25 10:13:48.733: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8202 get pods update-demo-nautilus-9x89j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 25 10:13:48.869: INFO: stderr: "" May 25 10:13:48.869: INFO: stdout: "" May 25 10:13:48.869: INFO: update-demo-nautilus-9x89j is created but not running May 25 10:13:53.870: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8202 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 25 10:13:54.107: INFO: stderr: "" May 25 10:13:54.107: INFO: stdout: "update-demo-nautilus-9x89j update-demo-nautilus-j7lf8 " May 25 10:13:54.107: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8202 get pods update-demo-nautilus-9x89j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 25 10:13:54.506: INFO: stderr: "" May 25 10:13:54.506: INFO: stdout: "true" May 25 10:13:54.506: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8202 get pods update-demo-nautilus-9x89j -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 25 10:13:54.636: INFO: stderr: "" May 25 10:13:54.636: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 25 10:13:54.636: INFO: validating pod update-demo-nautilus-9x89j May 25 10:13:54.641: INFO: got data: { "image": "nautilus.jpg" } May 25 10:13:54.641: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 25 10:13:54.641: INFO: update-demo-nautilus-9x89j is verified up and running May 25 10:13:54.641: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8202 get pods update-demo-nautilus-j7lf8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 25 10:13:54.776: INFO: stderr: "" May 25 10:13:54.776: INFO: stdout: "true" May 25 10:13:54.776: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8202 get pods update-demo-nautilus-j7lf8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 25 10:13:54.915: INFO: stderr: "" May 25 10:13:54.915: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 25 10:13:54.915: INFO: validating pod update-demo-nautilus-j7lf8 May 25 10:13:54.919: INFO: got data: { "image": "nautilus.jpg" } May 25 10:13:54.919: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 25 10:13:54.919: INFO: update-demo-nautilus-j7lf8 is verified up and running STEP: using delete to clean up resources May 25 10:13:54.919: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8202 delete --grace-period=0 --force -f -' May 25 10:13:55.103: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 25 10:13:55.103: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 25 10:13:55.103: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8202 get rc,svc -l name=update-demo --no-headers' May 25 10:13:55.299: INFO: stderr: "No resources found in kubectl-8202 namespace.\n" May 25 10:13:55.299: INFO: stdout: "" May 25 10:13:55.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8202 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 25 10:13:55.440: INFO: stderr: "" May 25 10:13:55.440: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:13:55.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8202" for this suite. 
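The scaling test above polls each pod with a kubectl go-template that emits `true` only when the `update-demo` container reports a `running` state (`{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}`). The same predicate, expressed in Python over a Pod-shaped dict for clarity (pod data below is invented for illustration):

```python
def container_running(pod, name="update-demo"):
    """True iff the named container has a 'running' entry in its state."""
    for cs in pod.get("status", {}).get("containerStatuses", []):
        if cs.get("name") == name and "running" in cs.get("state", {}):
            return True
    return False

pending = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"waiting": {"reason": "ContainerCreating"}}}]}}
running = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"running": {}}}]}}
```

This mirrors why the log shows an empty stdout (pod created but not running) on the first poll and `true` on the retry five seconds later.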
• [SLOW TEST:19.117 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":4,"skipped":21,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:13:54.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars May 25 10:13:54.158: INFO: Waiting up to 5m0s for pod "downward-api-75012b08-0b7d-4ad7-8437-05f74163c9f3" in namespace "downward-api-586" to be "Succeeded or Failed" May 25 10:13:54.161: INFO: Pod "downward-api-75012b08-0b7d-4ad7-8437-05f74163c9f3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.247207ms May 25 10:13:56.165: INFO: Pod "downward-api-75012b08-0b7d-4ad7-8437-05f74163c9f3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007126708s STEP: Saw pod success May 25 10:13:56.165: INFO: Pod "downward-api-75012b08-0b7d-4ad7-8437-05f74163c9f3" satisfied condition "Succeeded or Failed" May 25 10:13:56.168: INFO: Trying to get logs from node v1.21-worker pod downward-api-75012b08-0b7d-4ad7-8437-05f74163c9f3 container dapi-container: STEP: delete the pod May 25 10:13:56.181: INFO: Waiting for pod downward-api-75012b08-0b7d-4ad7-8437-05f74163c9f3 to disappear May 25 10:13:56.184: INFO: Pod downward-api-75012b08-0b7d-4ad7-8437-05f74163c9f3 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:13:56.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-586" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":571,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:13:53.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 10:13:54.488: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: 
Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 10:13:57.509: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:13:57.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2681" for this suite. STEP: Destroying namespace "webhook-2681-markers" for this suite. 
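The discovery test above walks `/apis`, then the group document, then the version document, checking that `admissionregistration.k8s.io/v1` and its webhook-configuration resources are present. A toy discovery document with the same nesting, and a lookup mirroring those steps (the document content here is a hand-built stand-in, not real API server output):

```python
def has_group_version(doc, group_name, group_version):
    """Check a /apis-style discovery doc for a given group/version."""
    group = next((g for g in doc["groups"] if g["name"] == group_name), None)
    if group is None:
        return False
    return any(v["groupVersion"] == group_version for v in group["versions"])

discovery = {"groups": [{
    "name": "admissionregistration.k8s.io",
    "versions": [{"groupVersion": "admissionregistration.k8s.io/v1"}],
}]}
```

The resource-level check in the test is analogous one level down: the version document lists resources such as `mutatingwebhookconfigurations` and `validatingwebhookconfigurations`.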
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":33,"skipped":377,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:13:55.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium May 25 10:13:55.499: INFO: Waiting up to 5m0s for pod "pod-cc15f601-b967-4df9-9a57-fa16560d0715" in namespace "emptydir-3712" to be "Succeeded or Failed" May 25 10:13:55.502: INFO: Pod "pod-cc15f601-b967-4df9-9a57-fa16560d0715": Phase="Pending", Reason="", readiness=false. Elapsed: 3.044256ms May 25 10:13:57.507: INFO: Pod "pod-cc15f601-b967-4df9-9a57-fa16560d0715": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008429874s May 25 10:13:59.512: INFO: Pod "pod-cc15f601-b967-4df9-9a57-fa16560d0715": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013238705s STEP: Saw pod success May 25 10:13:59.512: INFO: Pod "pod-cc15f601-b967-4df9-9a57-fa16560d0715" satisfied condition "Succeeded or Failed" May 25 10:13:59.515: INFO: Trying to get logs from node v1.21-worker pod pod-cc15f601-b967-4df9-9a57-fa16560d0715 container test-container: STEP: delete the pod May 25 10:13:59.530: INFO: Waiting for pod pod-cc15f601-b967-4df9-9a57-fa16560d0715 to disappear May 25 10:13:59.535: INFO: Pod pod-cc15f601-b967-4df9-9a57-fa16560d0715 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:13:59.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3712" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":25,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:13:57.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-64691d96-68f4-4da0-8c20-aab921d1cf86 STEP: Creating a pod to test consume secrets May 25 10:13:57.620: INFO: Waiting up to 5m0s for pod "pod-secrets-ce1d43fb-9fe6-406f-a11e-2035ce8dc082" in namespace "secrets-2128" to be "Succeeded or Failed" May 25 10:13:57.623: INFO: Pod 
"pod-secrets-ce1d43fb-9fe6-406f-a11e-2035ce8dc082": Phase="Pending", Reason="", readiness=false. Elapsed: 2.694293ms May 25 10:13:59.627: INFO: Pod "pod-secrets-ce1d43fb-9fe6-406f-a11e-2035ce8dc082": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00668068s STEP: Saw pod success May 25 10:13:59.627: INFO: Pod "pod-secrets-ce1d43fb-9fe6-406f-a11e-2035ce8dc082" satisfied condition "Succeeded or Failed" May 25 10:13:59.630: INFO: Trying to get logs from node v1.21-worker2 pod pod-secrets-ce1d43fb-9fe6-406f-a11e-2035ce8dc082 container secret-volume-test: STEP: delete the pod May 25 10:13:59.645: INFO: Waiting for pod pod-secrets-ce1d43fb-9fe6-406f-a11e-2035ce8dc082 to disappear May 25 10:13:59.648: INFO: Pod pod-secrets-ce1d43fb-9fe6-406f-a11e-2035ce8dc082 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:13:59.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2128" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":392,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:13:56.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on node default medium May 25 10:13:56.247: INFO: Waiting up to 5m0s for pod "pod-d89020ea-f2ca-41df-a254-44d565274633" in namespace "emptydir-4540" to be "Succeeded or Failed" May 25 10:13:56.250: INFO: Pod "pod-d89020ea-f2ca-41df-a254-44d565274633": Phase="Pending", Reason="", readiness=false. Elapsed: 2.637247ms May 25 10:13:58.254: INFO: Pod "pod-d89020ea-f2ca-41df-a254-44d565274633": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00689488s May 25 10:14:00.258: INFO: Pod "pod-d89020ea-f2ca-41df-a254-44d565274633": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010673718s STEP: Saw pod success May 25 10:14:00.258: INFO: Pod "pod-d89020ea-f2ca-41df-a254-44d565274633" satisfied condition "Succeeded or Failed" May 25 10:14:00.261: INFO: Trying to get logs from node v1.21-worker pod pod-d89020ea-f2ca-41df-a254-44d565274633 container test-container: STEP: delete the pod May 25 10:14:00.274: INFO: Waiting for pod pod-d89020ea-f2ca-41df-a254-44d565274633 to disappear May 25 10:14:00.276: INFO: Pod pod-d89020ea-f2ca-41df-a254-44d565274633 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:14:00.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4540" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":576,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:13:59.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 10:14:00.243: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 
10:14:02.254: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534440, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534440, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534440, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534440, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 10:14:05.268: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:14:05.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5998" for this suite. 
STEP: Destroying namespace "webhook-5998-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.722 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":35,"skipped":407,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:13:59.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 25 10:13:59.627: INFO: Waiting up to 5m0s for pod "downwardapi-volume-485d9a99-75d7-45dd-8464-160a5f5712d7" in namespace "downward-api-8552" to be "Succeeded or Failed" May 25 10:13:59.630: INFO: Pod 
"downwardapi-volume-485d9a99-75d7-45dd-8464-160a5f5712d7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.087449ms May 25 10:14:01.633: INFO: Pod "downwardapi-volume-485d9a99-75d7-45dd-8464-160a5f5712d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006938668s May 25 10:14:03.638: INFO: Pod "downwardapi-volume-485d9a99-75d7-45dd-8464-160a5f5712d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011052499s May 25 10:14:05.642: INFO: Pod "downwardapi-volume-485d9a99-75d7-45dd-8464-160a5f5712d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015913466s STEP: Saw pod success May 25 10:14:05.643: INFO: Pod "downwardapi-volume-485d9a99-75d7-45dd-8464-160a5f5712d7" satisfied condition "Succeeded or Failed" May 25 10:14:05.646: INFO: Trying to get logs from node v1.21-worker pod downwardapi-volume-485d9a99-75d7-45dd-8464-160a5f5712d7 container client-container: STEP: delete the pod May 25 10:14:05.659: INFO: Waiting for pod downwardapi-volume-485d9a99-75d7-45dd-8464-160a5f5712d7 to disappear May 25 10:14:05.663: INFO: Pod downwardapi-volume-485d9a99-75d7-45dd-8464-160a5f5712d7 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:14:05.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8552" for this suite. 
• [SLOW TEST:6.088 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":40,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:14:05.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 25 10:14:05.542: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cab65ca4-26a2-4218-9b62-ec0de5b3091f" in namespace "projected-7736" to be "Succeeded or Failed" May 25 10:14:05.545: INFO: Pod "downwardapi-volume-cab65ca4-26a2-4218-9b62-ec0de5b3091f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.683089ms May 25 10:14:07.549: INFO: Pod "downwardapi-volume-cab65ca4-26a2-4218-9b62-ec0de5b3091f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007253934s May 25 10:14:09.554: INFO: Pod "downwardapi-volume-cab65ca4-26a2-4218-9b62-ec0de5b3091f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012054745s STEP: Saw pod success May 25 10:14:09.554: INFO: Pod "downwardapi-volume-cab65ca4-26a2-4218-9b62-ec0de5b3091f" satisfied condition "Succeeded or Failed" May 25 10:14:09.557: INFO: Trying to get logs from node v1.21-worker2 pod downwardapi-volume-cab65ca4-26a2-4218-9b62-ec0de5b3091f container client-container: STEP: delete the pod May 25 10:14:09.575: INFO: Waiting for pod downwardapi-volume-cab65ca4-26a2-4218-9b62-ec0de5b3091f to disappear May 25 10:14:09.578: INFO: Pod downwardapi-volume-cab65ca4-26a2-4218-9b62-ec0de5b3091f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:14:09.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7736" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":465,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:14:05.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-d898af1b-4236-4579-a2e9-961fedab1699 STEP: Creating a pod to test consume configMaps May 25 10:14:05.740: INFO: Waiting up to 5m0s for pod "pod-configmaps-f163ea0a-e108-4f6a-a178-5467a2958d41" in namespace "configmap-6860" to be "Succeeded or Failed" May 25 10:14:05.742: INFO: Pod "pod-configmaps-f163ea0a-e108-4f6a-a178-5467a2958d41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.518357ms May 25 10:14:07.748: INFO: Pod "pod-configmaps-f163ea0a-e108-4f6a-a178-5467a2958d41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008051567s May 25 10:14:09.753: INFO: Pod "pod-configmaps-f163ea0a-e108-4f6a-a178-5467a2958d41": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013149831s STEP: Saw pod success May 25 10:14:09.753: INFO: Pod "pod-configmaps-f163ea0a-e108-4f6a-a178-5467a2958d41" satisfied condition "Succeeded or Failed" May 25 10:14:09.757: INFO: Trying to get logs from node v1.21-worker2 pod pod-configmaps-f163ea0a-e108-4f6a-a178-5467a2958d41 container agnhost-container: STEP: delete the pod May 25 10:14:09.772: INFO: Waiting for pod pod-configmaps-f163ea0a-e108-4f6a-a178-5467a2958d41 to disappear May 25 10:14:09.775: INFO: Pod pod-configmaps-f163ea0a-e108-4f6a-a178-5467a2958d41 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:14:09.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6860" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":56,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:13:41.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local;check="$$(dig +tcp 
+noall +answer +search dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2869.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-2869.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2869.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-2869.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2869.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2869.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-2869.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2869.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-2869.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2869.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 10:13:43.422: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:43.426: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:43.430: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:43.433: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:43.447: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:43.451: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local from 
pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:43.455: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:43.459: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:43.467: INFO: Lookups using dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2869.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2869.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local jessie_udp@dns-test-service-2.dns-2869.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2869.svc.cluster.local] May 25 10:13:48.472: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:48.475: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:48.479: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2869.svc.cluster.local from 
pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:48.482: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:48.493: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:48.497: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:48.501: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:48.504: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:48.511: INFO: Lookups using dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2869.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2869.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local jessie_udp@dns-test-service-2.dns-2869.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2869.svc.cluster.local] May 25 10:13:53.476: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:53.482: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:53.485: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:53.489: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:53.499: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:53.503: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:53.506: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2869.svc.cluster.local from pod 
dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:53.510: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:53.517: INFO: Lookups using dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2869.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2869.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local jessie_udp@dns-test-service-2.dns-2869.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2869.svc.cluster.local] May 25 10:13:58.473: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:58.477: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:58.481: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:58.484: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2869.svc.cluster.local from pod 
dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:58.495: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:58.498: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:58.502: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:58.505: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:13:58.513: INFO: Lookups using dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2869.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2869.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local jessie_udp@dns-test-service-2.dns-2869.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2869.svc.cluster.local] May 25 10:14:03.472: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:14:03.476: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:14:03.480: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:14:03.484: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:14:03.495: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:14:03.499: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:14:03.503: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:14:03.507: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:14:03.514: INFO: Lookups using dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2869.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2869.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local jessie_udp@dns-test-service-2.dns-2869.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2869.svc.cluster.local] May 25 10:14:08.473: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:14:08.476: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:14:08.480: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:14:08.484: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:14:08.495: INFO: Unable to read 
jessie_udp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:14:08.499: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:14:08.503: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:14:08.506: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:14:08.514: INFO: Lookups using dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2869.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2869.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local jessie_udp@dns-test-service-2.dns-2869.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2869.svc.cluster.local] May 25 10:14:13.505: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2869.svc.cluster.local from pod dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e: the server could not find the requested resource (get pods dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e) May 25 10:14:13.511: INFO: Lookups using 
dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e failed for: [jessie_tcp@dns-test-service-2.dns-2869.svc.cluster.local] May 25 10:14:18.516: INFO: DNS probes using dns-2869/dns-test-fc845d57-48aa-463a-af2a-03db78aa2f9e succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:14:18.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2869" for this suite. • [SLOW TEST:37.183 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":6,"skipped":98,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:58.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-cd3ef72f-be20-4a03-974f-83c719b076fe STEP: Creating the pod May 25 10:12:58.385: INFO: The status of Pod pod-configmaps-3ac1115e-4ced-492b-8bd1-2ddc3b5ee427 is Pending, waiting for it to be Running (with Ready = true) May 25 10:13:00.388: INFO: The status of Pod 
pod-configmaps-3ac1115e-4ced-492b-8bd1-2ddc3b5ee427 is Pending, waiting for it to be Running (with Ready = true) May 25 10:13:02.390: INFO: The status of Pod pod-configmaps-3ac1115e-4ced-492b-8bd1-2ddc3b5ee427 is Running (Ready = true) STEP: Updating configmap configmap-test-upd-cd3ef72f-be20-4a03-974f-83c719b076fe STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:14:19.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-661" for this suite. • [SLOW TEST:81.222 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":450,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:14:18.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-3ddcef17-ab84-4057-87db-5dc7da0e5be4 STEP: Creating a pod to test consume configMaps 
May 25 10:14:18.590: INFO: Waiting up to 5m0s for pod "pod-configmaps-cda71b43-82eb-49dd-960d-aff1d0541fd8" in namespace "configmap-824" to be "Succeeded or Failed" May 25 10:14:18.593: INFO: Pod "pod-configmaps-cda71b43-82eb-49dd-960d-aff1d0541fd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.754929ms May 25 10:14:20.597: INFO: Pod "pod-configmaps-cda71b43-82eb-49dd-960d-aff1d0541fd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006621116s STEP: Saw pod success May 25 10:14:20.597: INFO: Pod "pod-configmaps-cda71b43-82eb-49dd-960d-aff1d0541fd8" satisfied condition "Succeeded or Failed" May 25 10:14:20.600: INFO: Trying to get logs from node v1.21-worker2 pod pod-configmaps-cda71b43-82eb-49dd-960d-aff1d0541fd8 container agnhost-container: STEP: delete the pod May 25 10:14:20.614: INFO: Waiting for pod pod-configmaps-cda71b43-82eb-49dd-960d-aff1d0541fd8 to disappear May 25 10:14:20.616: INFO: Pod pod-configmaps-cda71b43-82eb-49dd-960d-aff1d0541fd8 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:14:20.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-824" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":99,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:14:19.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 25 10:14:19.660: INFO: Waiting up to 5m0s for pod "downwardapi-volume-95b26b03-9d3d-40c3-9412-c376d46319bf" in namespace "downward-api-3600" to be "Succeeded or Failed" May 25 10:14:19.663: INFO: Pod "downwardapi-volume-95b26b03-9d3d-40c3-9412-c376d46319bf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.299794ms May 25 10:14:21.672: INFO: Pod "downwardapi-volume-95b26b03-9d3d-40c3-9412-c376d46319bf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.01180777s STEP: Saw pod success May 25 10:14:21.672: INFO: Pod "downwardapi-volume-95b26b03-9d3d-40c3-9412-c376d46319bf" satisfied condition "Succeeded or Failed" May 25 10:14:21.675: INFO: Trying to get logs from node v1.21-worker pod downwardapi-volume-95b26b03-9d3d-40c3-9412-c376d46319bf container client-container: STEP: delete the pod May 25 10:14:21.688: INFO: Waiting for pod downwardapi-volume-95b26b03-9d3d-40c3-9412-c376d46319bf to disappear May 25 10:14:21.691: INFO: Pod downwardapi-volume-95b26b03-9d3d-40c3-9412-c376d46319bf no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:14:21.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3600" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":479,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:14:20.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:14:20.754: INFO: The status of Pod pod-secrets-839c5b65-199c-4863-8da4-af903ae13d9d is Pending, waiting for it to be Running (with Ready = true) May 25 10:14:22.763: INFO: The status of Pod pod-secrets-839c5b65-199c-4863-8da4-af903ae13d9d is Running (Ready = true) 
STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:14:22.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9365" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":8,"skipped":145,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:13:23.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:14:23.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8066" for this suite. 
• [SLOW TEST:60.442 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":611,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:14:24.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 25 10:14:24.613: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-535 d7205f14-aa5e-4786-b7c9-ca70f3741db6 502058 0 2021-05-25 10:14:24 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-05-25 10:14:24 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 25 10:14:24.613: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-535 d7205f14-aa5e-4786-b7c9-ca70f3741db6 502059 0 2021-05-25 10:14:24 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-05-25 10:14:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:14:24.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-535" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":32,"skipped":629,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:14:22.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs May 25 10:14:22.846: INFO: Waiting up to 5m0s for pod "pod-90783442-68f2-4361-9b92-e5b0171f49a5" in namespace "emptydir-8269" to be 
"Succeeded or Failed" May 25 10:14:22.849: INFO: Pod "pod-90783442-68f2-4361-9b92-e5b0171f49a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.58645ms May 25 10:14:24.880: INFO: Pod "pod-90783442-68f2-4361-9b92-e5b0171f49a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033748648s May 25 10:14:26.891: INFO: Pod "pod-90783442-68f2-4361-9b92-e5b0171f49a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045158125s STEP: Saw pod success May 25 10:14:26.891: INFO: Pod "pod-90783442-68f2-4361-9b92-e5b0171f49a5" satisfied condition "Succeeded or Failed" May 25 10:14:26.894: INFO: Trying to get logs from node v1.21-worker2 pod pod-90783442-68f2-4361-9b92-e5b0171f49a5 container test-container: STEP: delete the pod May 25 10:14:26.902: INFO: Waiting for pod pod-90783442-68f2-4361-9b92-e5b0171f49a5 to disappear May 25 10:14:26.904: INFO: Pod pod-90783442-68f2-4361-9b92-e5b0171f49a5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:14:26.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8269" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":151,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:14:26.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-0b58806e-a909-406b-bb00-6da35a19abe8 STEP: Creating a pod to test consume configMaps May 25 10:14:26.956: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6cbab93b-bce3-43a1-8cb5-987912194d60" in namespace "projected-1988" to be "Succeeded or Failed" May 25 10:14:26.959: INFO: Pod "pod-projected-configmaps-6cbab93b-bce3-43a1-8cb5-987912194d60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.279245ms May 25 10:14:28.963: INFO: Pod "pod-projected-configmaps-6cbab93b-bce3-43a1-8cb5-987912194d60": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006601216s STEP: Saw pod success May 25 10:14:28.963: INFO: Pod "pod-projected-configmaps-6cbab93b-bce3-43a1-8cb5-987912194d60" satisfied condition "Succeeded or Failed" May 25 10:14:28.965: INFO: Trying to get logs from node v1.21-worker2 pod pod-projected-configmaps-6cbab93b-bce3-43a1-8cb5-987912194d60 container agnhost-container: STEP: delete the pod May 25 10:14:28.980: INFO: Waiting for pod pod-projected-configmaps-6cbab93b-bce3-43a1-8cb5-987912194d60 to disappear May 25 10:14:28.983: INFO: Pod pod-projected-configmaps-6cbab93b-bce3-43a1-8cb5-987912194d60 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:14:28.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1988" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":161,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:14:21.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read 
extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 25 10:14:22.334: INFO: new replicaset for deployment "sample-crd-conversion-webhook-deployment" is yet to be created May 25 10:14:24.584: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534462, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534462, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534462, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534462, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 10:14:27.595: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:14:27.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:14:30.814: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9692" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.126 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":31,"skipped":490,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:14:30.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-4029/configmap-test-2f874ba5-e6f2-42b1-af55-860181dc2296 STEP: Creating a pod to test consume configMaps May 25 10:14:30.901: INFO: Waiting up to 5m0s for pod "pod-configmaps-db001fb9-acc6-42c0-8b38-8cd263ce36bc" in namespace "configmap-4029" to be "Succeeded or Failed" May 25 10:14:30.903: INFO: Pod "pod-configmaps-db001fb9-acc6-42c0-8b38-8cd263ce36bc": Phase="Pending", Reason="", 
readiness=false. Elapsed: 2.890943ms May 25 10:14:32.909: INFO: Pod "pod-configmaps-db001fb9-acc6-42c0-8b38-8cd263ce36bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007928032s STEP: Saw pod success May 25 10:14:32.909: INFO: Pod "pod-configmaps-db001fb9-acc6-42c0-8b38-8cd263ce36bc" satisfied condition "Succeeded or Failed" May 25 10:14:32.912: INFO: Trying to get logs from node v1.21-worker2 pod pod-configmaps-db001fb9-acc6-42c0-8b38-8cd263ce36bc container env-test: STEP: delete the pod May 25 10:14:32.927: INFO: Waiting for pod pod-configmaps-db001fb9-acc6-42c0-8b38-8cd263ce36bc to disappear May 25 10:14:32.930: INFO: Pod pod-configmaps-db001fb9-acc6-42c0-8b38-8cd263ce36bc no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:14:32.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4029" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":494,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:14:32.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating cluster-info May 25 10:14:32.999: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-6260 cluster-info' May 25 10:14:33.128: INFO: stderr: "" May 25 10:14:33.128: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://172.30.13.90:33295\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:14:33.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6260" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":33,"skipped":506,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:14:29.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [BeforeEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:14:29.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption-2 STEP: Waiting for a default service account to be provisioned in namespace [It] should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: listing a collection of PDBs across all namespaces STEP: listing a collection of PDBs in namespace disruption-4321 STEP: deleting a collection of PDBs STEP: Waiting for the PDB collection to be deleted [AfterEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:14:35.196: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "disruption-2-4891" for this suite. [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:14:35.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-4321" for this suite. • [SLOW TEST:6.120 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:75 should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":11,"skipped":227,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:14:09.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD May 25 10:14:09.832: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version 
gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:14:35.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-981" for this suite. • [SLOW TEST:25.441 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":8,"skipped":62,"failed":0} S ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:14:35.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting the proxy server May 25 10:14:35.250: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config 
--namespace=kubectl-4346 proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:14:35.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4346" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":12,"skipped":228,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:14:24.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1386 STEP: creating a pod May 25 10:14:25.000: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3194 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' May 25 10:14:25.143: INFO: stderr: "" May 25 10:14:25.143: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for log generator to start. 
May 25 10:14:25.144: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 25 10:14:25.144: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3194" to be "running and ready, or succeeded" May 25 10:14:25.147: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.846667ms May 25 10:14:27.151: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.006914141s May 25 10:14:27.151: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 25 10:14:27.151: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for matching strings May 25 10:14:27.151: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3194 logs logs-generator logs-generator' May 25 10:14:27.290: INFO: stderr: "" May 25 10:14:27.290: INFO: stdout: "I0525 10:14:26.340120 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/x5b4 308\nI0525 10:14:26.540331 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/cqsj 454\nI0525 10:14:26.740904 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/5952 499\nI0525 10:14:26.940244 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/rdl 364\nI0525 10:14:27.140665 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/s2f 321\n" STEP: limiting log lines May 25 10:14:27.291: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3194 logs logs-generator logs-generator --tail=1' May 25 10:14:27.892: INFO: stderr: "" May 25 10:14:27.892: INFO: stdout: "I0525 10:14:27.741063 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/bj8 355\n" May 25 10:14:27.892: INFO: got output "I0525 10:14:27.741063 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/bj8 355\n" STEP: limiting log 
bytes May 25 10:14:27.892: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3194 logs logs-generator logs-generator --limit-bytes=1' May 25 10:14:28.225: INFO: stderr: "" May 25 10:14:28.225: INFO: stdout: "I" May 25 10:14:28.225: INFO: got output "I" STEP: exposing timestamps May 25 10:14:28.225: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3194 logs logs-generator logs-generator --tail=1 --timestamps' May 25 10:14:28.523: INFO: stderr: "" May 25 10:14:28.523: INFO: stdout: "2021-05-25T10:14:28.340554181Z I0525 10:14:28.340298 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/m4vk 594\n" May 25 10:14:28.523: INFO: got output "2021-05-25T10:14:28.340554181Z I0525 10:14:28.340298 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/m4vk 594\n" STEP: restricting to a time range May 25 10:14:31.024: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3194 logs logs-generator logs-generator --since=1s' May 25 10:14:31.165: INFO: stderr: "" May 25 10:14:31.165: INFO: stdout: "I0525 10:14:30.340633 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/z2h 300\nI0525 10:14:30.541102 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/rlk 440\nI0525 10:14:30.740536 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/wnn 468\nI0525 10:14:30.941013 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/l496 558\nI0525 10:14:31.140262 1 logs_generator.go:76] 24 POST /api/v1/namespaces/default/pods/x89 355\n" May 25 10:14:31.165: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3194 logs logs-generator logs-generator --since=24h' May 25 10:14:31.301: INFO: stderr: "" May 25 10:14:31.301: INFO: stdout: "I0525 10:14:26.340120 1 
logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/x5b4 308\nI0525 10:14:26.540331 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/cqsj 454\nI0525 10:14:26.740904 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/5952 499\nI0525 10:14:26.940244 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/rdl 364\nI0525 10:14:27.140665 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/s2f 321\nI0525 10:14:27.341152 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/mch 419\nI0525 10:14:27.540599 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/vp5 388\nI0525 10:14:27.741063 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/bj8 355\nI0525 10:14:27.940504 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/78s 269\nI0525 10:14:28.140948 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/n4hr 312\nI0525 10:14:28.340298 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/m4vk 594\nI0525 10:14:28.540668 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/l7k 242\nI0525 10:14:28.741041 1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/xmbl 477\nI0525 10:14:28.940357 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/b9f 430\nI0525 10:14:29.140785 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/vgxs 357\nI0525 10:14:29.341218 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/n5q 383\nI0525 10:14:29.541086 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/jk6 296\nI0525 10:14:29.740544 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/pv8 588\nI0525 10:14:29.941032 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/j4d 417\nI0525 10:14:30.140254 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/8mcm 391\nI0525 10:14:30.340633 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/z2h 300\nI0525 10:14:30.541102 1 logs_generator.go:76] 21 
POST /api/v1/namespaces/ns/pods/rlk 440\nI0525 10:14:30.740536 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/wnn 468\nI0525 10:14:30.941013 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/l496 558\nI0525 10:14:31.140262 1 logs_generator.go:76] 24 POST /api/v1/namespaces/default/pods/x89 355\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391 May 25 10:14:31.301: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3194 delete pod logs-generator' May 25 10:14:35.479: INFO: stderr: "" May 25 10:14:35.479: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:14:35.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3194" for this suite. 
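The log-filtering test above asserts that every line emitted by the logs-generator container matches a fixed shape (sequence number, HTTP method, URL, status code). A minimal sketch of that check, with the regex inferred from the sample output above rather than taken from the test source:

```python
import re

# Pattern inferred from the logs-generator output shown above; the real
# e2e test's regex may differ in detail.
LINE_RE = re.compile(
    r"^I\d{4} \d{2}:\d{2}:\d{2}\.\d+\s+\d+ logs_generator\.go:\d+\] "
    r"(?P<seq>\d+) (?P<method>GET|POST|PUT) (?P<url>\S+) (?P<code>\d+)$"
)

def parse_line(line: str) -> dict:
    """Extract sequence number, method, URL, and status code from one line."""
    m = LINE_RE.match(line)
    if m is None:
        raise ValueError(f"unexpected log line: {line!r}")
    return m.groupdict()

sample = ("I0525 10:14:26.340120       1 logs_generator.go:76] "
          "0 POST /api/v1/namespaces/default/pods/x5b4 308")
fields = parse_line(sample)
```

The `--tail=1`, `--limit-bytes=1`, `--timestamps`, and `--since=1s` invocations in the transcript then check that kubectl trims this stream by line count, byte count, and time window respectively.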
• [SLOW TEST:10.825 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1383 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":33,"skipped":656,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:14:35.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:14:35.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7204" for this suite. •S ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":-1,"completed":13,"skipped":258,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:14:35.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption-release is created May 25 10:14:35.574: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) May 25 10:14:37.582: INFO: The status of Pod pod-adoption-release is Running (Ready = true) STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 25 10:14:38.602: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:14:39.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3139" for this suite. 
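The adopt/release behavior exercised above comes down to label-selector matching: the ReplicaSet adopts an orphan pod whose labels satisfy its selector, and releases the pod once a matched label changes. A toy model of that rule (not the actual controller code; label values are illustrative):

```python
def selector_matches(selector: dict, pod_labels: dict) -> bool:
    """A pod matches when every selector key/value pair is present in its
    labels; changing a matched label breaks the match and releases the pod."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

selector = {"name": "pod-adoption-release"}          # illustrative selector
adopted = selector_matches(selector, {"name": "pod-adoption-release"})
released = not selector_matches(selector, {"name": "pod-adoption-release-edited"})
```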
• ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":34,"skipped":678,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:14:39.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs May 25 10:14:39.702: INFO: Waiting up to 5m0s for pod "pod-b7ee3fe4-9b5f-4485-90e4-f20e61f4ea9a" in namespace "emptydir-6326" to be "Succeeded or Failed" May 25 10:14:39.706: INFO: Pod "pod-b7ee3fe4-9b5f-4485-90e4-f20e61f4ea9a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.588403ms May 25 10:14:41.710: INFO: Pod "pod-b7ee3fe4-9b5f-4485-90e4-f20e61f4ea9a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007509332s STEP: Saw pod success May 25 10:14:41.710: INFO: Pod "pod-b7ee3fe4-9b5f-4485-90e4-f20e61f4ea9a" satisfied condition "Succeeded or Failed" May 25 10:14:41.713: INFO: Trying to get logs from node v1.21-worker2 pod pod-b7ee3fe4-9b5f-4485-90e4-f20e61f4ea9a container test-container: STEP: delete the pod May 25 10:14:41.733: INFO: Waiting for pod pod-b7ee3fe4-9b5f-4485-90e4-f20e61f4ea9a to disappear May 25 10:14:41.739: INFO: Pod pod-b7ee3fe4-9b5f-4485-90e4-f20e61f4ea9a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:14:41.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6326" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":688,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:14:41.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:14:41.810: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8250 create -f -' May 25 10:14:42.204: INFO: stderr: "" 
May 25 10:14:42.204: INFO: stdout: "replicationcontroller/agnhost-primary created\n" May 25 10:14:42.204: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8250 create -f -' May 25 10:14:42.491: INFO: stderr: "" May 25 10:14:42.491: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. May 25 10:14:43.497: INFO: Selector matched 1 pods for map[app:agnhost] May 25 10:14:43.497: INFO: Found 0 / 1 May 25 10:14:44.496: INFO: Selector matched 1 pods for map[app:agnhost] May 25 10:14:44.496: INFO: Found 1 / 1 May 25 10:14:44.496: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 25 10:14:44.500: INFO: Selector matched 1 pods for map[app:agnhost] May 25 10:14:44.500: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 25 10:14:44.500: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8250 describe pod agnhost-primary-qnrnk' May 25 10:14:44.659: INFO: stderr: "" May 25 10:14:44.659: INFO: stdout: "Name: agnhost-primary-qnrnk\nNamespace: kubectl-8250\nPriority: 0\nNode: v1.21-worker2/172.18.0.2\nStart Time: Tue, 25 May 2021 10:14:42 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.2.22\"\n ],\n \"mac\": \"de:6d:d5:6b:46:b7\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.2.22\"\n ],\n \"mac\": \"de:6d:d5:6b:46:b7\",\n \"default\": true,\n \"dns\": {}\n }]\nStatus: Running\nIP: 10.244.2.22\nIPs:\n IP: 10.244.2.22\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://be105118bfb9ebc46849207964f1a6b6af39b5839ccc15394c5318843c7862e6\n Image: 
k8s.gcr.io/e2e-test-images/agnhost:2.32\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 25 May 2021 10:14:43 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cqtbm (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-cqtbm:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 2s default-scheduler Successfully assigned kubectl-8250/agnhost-primary-qnrnk to v1.21-worker2\n Normal AddedInterface 2s multus Add eth0 [10.244.2.22/24]\n Normal Pulled 2s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n Normal Created 2s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" May 25 10:14:44.659: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8250 describe rc agnhost-primary' May 25 10:14:44.829: INFO: stderr: "" May 25 10:14:44.829: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-8250\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: 
k8s.gcr.io/e2e-test-images/agnhost:2.32\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 2s replication-controller Created pod: agnhost-primary-qnrnk\n" May 25 10:14:44.829: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8250 describe service agnhost-primary' May 25 10:14:44.982: INFO: stderr: "" May 25 10:14:44.982: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-8250\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.96.249.105\nIPs: 10.96.249.105\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.22:6379\nSession Affinity: None\nEvents: \n" May 25 10:14:44.987: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8250 describe node v1.21-control-plane' May 25 10:14:45.166: INFO: stderr: "" May 25 10:14:45.166: INFO: stdout: "Name: v1.21-control-plane\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n ingress-ready=true\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=v1.21-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\n node.kubernetes.io/exclude-from-external-load-balancers=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Mon, 24 May 2021 17:23:54 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: v1.21-control-plane\n AcquireTime: \n RenewTime: Tue, 25 May 2021 10:14:35 +0000\nConditions:\n Type Status LastHeartbeatTime 
LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 25 May 2021 10:11:44 +0000 Mon, 24 May 2021 17:23:48 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 25 May 2021 10:11:44 +0000 Mon, 24 May 2021 17:23:48 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 25 May 2021 10:11:44 +0000 Mon, 24 May 2021 17:23:48 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 25 May 2021 10:11:44 +0000 Mon, 24 May 2021 17:24:29 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.3\n Hostname: v1.21-control-plane\nCapacity:\n cpu: 88\n ephemeral-storage: 459602040Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65849824Ki\n pods: 110\nAllocatable:\n cpu: 88\n ephemeral-storage: 459602040Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65849824Ki\n pods: 110\nSystem Info:\n Machine ID: b1187601652c41a3b6c159b2e850901f\n System UUID: c02e3c8f-3b60-418f-b9cb-d607e75a042a\n Boot ID: be455131-27dd-43f1-b9be-d55ec4651321\n Kernel Version: 5.4.0-73-generic\n OS Image: Ubuntu 20.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.5.1\n Kubelet Version: v1.21.1\n Kube-Proxy Version: v1.21.1\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nProviderID: kind://docker/v1.21/v1.21-control-plane\nNon-terminated Pods: (12 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system create-loop-devs-b8n7x 0 (0%) 0 (0%) 0 (0%) 0 (0%) 16h\n kube-system etcd-v1.21-control-plane 100m (0%) 0 (0%) 100Mi (0%) 0 (0%) 16h\n kube-system kindnet-x82hf 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 16h\n kube-system kube-apiserver-v1.21-control-plane 250m (0%) 0 (0%) 0 (0%) 0 (0%) 16h\n kube-system 
kube-controller-manager-v1.21-control-plane 200m (0%) 0 (0%) 0 (0%) 0 (0%) 16h\n kube-system kube-multus-ds-w7mzq 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 16h\n kube-system kube-proxy-c2smh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 16h\n kube-system kube-scheduler-v1.21-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 16h\n kube-system tune-sysctls-t9v46 0 (0%) 0 (0%) 0 (0%) 0 (0%) 16h\n local-path-storage local-path-provisioner-547f784dff-7qwzp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 16h\n metallb-system speaker-8ck5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 16h\n projectcontour envoy-lg6jb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 16h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (0%) 200m (0%)\n memory 200Mi (0%) 100Mi (0%)\n ephemeral-storage 100Mi (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" May 25 10:14:45.166: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8250 describe namespace kubectl-8250' May 25 10:14:45.309: INFO: stderr: "" May 25 10:14:45.309: INFO: stdout: "Name: kubectl-8250\nLabels: e2e-framework=kubectl\n e2e-run=36845b2f-9f38-4a80-abca-4f986618219b\n kubernetes.io/metadata.name=kubectl-8250\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:14:45.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8250" for this suite. 
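The `describe node` output above reports allocated resources as a percentage of allocatable (e.g. `cpu 850m (0%)` against 88 allocatable cores). A sketch of that arithmetic; the truncation to whole percent is inferred from the displayed values, not from kubectl source:

```python
def parse_cpu(quantity: str) -> float:
    """Convert a Kubernetes CPU quantity ('850m' or '2') to cores."""
    if quantity.endswith("m"):
        return int(quantity[:-1]) / 1000.0
    return float(quantity)

def percent_of_allocatable(requests: str, allocatable_cores: int) -> int:
    # Truncating (not rounding) reproduces the 850m -> 0% shown above.
    return int(parse_cpu(requests) / allocatable_cores * 100)
```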
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":36,"skipped":700,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:14:45.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-527cb59f-d117-42ce-b0cb-b6b0528acc30 STEP: Creating a pod to test consume secrets May 25 10:14:45.402: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-453715e6-4e66-40a1-a075-dd4aaa43b5d9" in namespace "projected-679" to be "Succeeded or Failed" May 25 10:14:45.405: INFO: Pod "pod-projected-secrets-453715e6-4e66-40a1-a075-dd4aaa43b5d9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.117381ms May 25 10:14:47.410: INFO: Pod "pod-projected-secrets-453715e6-4e66-40a1-a075-dd4aaa43b5d9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007856781s STEP: Saw pod success May 25 10:14:47.410: INFO: Pod "pod-projected-secrets-453715e6-4e66-40a1-a075-dd4aaa43b5d9" satisfied condition "Succeeded or Failed" May 25 10:14:47.413: INFO: Trying to get logs from node v1.21-worker2 pod pod-projected-secrets-453715e6-4e66-40a1-a075-dd4aaa43b5d9 container projected-secret-volume-test: STEP: delete the pod May 25 10:14:47.427: INFO: Waiting for pod pod-projected-secrets-453715e6-4e66-40a1-a075-dd4aaa43b5d9 to disappear May 25 10:14:47.430: INFO: Pod pod-projected-secrets-453715e6-4e66-40a1-a075-dd4aaa43b5d9 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:14:47.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-679" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":717,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:14:35.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 10:14:36.464: 
INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 10:14:39.483: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:14:55.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9322" for this suite. STEP: Destroying namespace "webhook-9322-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:21.237 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":14,"skipped":270,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:14:33.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6149.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6149.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6149.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6149.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each 
expected name from probers May 25 10:14:35.258: INFO: DNS probes using dns-test-e30a054b-d333-415d-a734-e7aea869454c succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6149.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6149.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6149.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6149.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 10:14:37.300: INFO: File jessie_udp@dns-test-service-3.dns-6149.svc.cluster.local from pod dns-6149/dns-test-47d89dc9-2de1-4987-9db7-bda7f425af3e contains 'foo.example.com. ' instead of 'bar.example.com.' May 25 10:14:37.300: INFO: Lookups using dns-6149/dns-test-47d89dc9-2de1-4987-9db7-bda7f425af3e failed for: [jessie_udp@dns-test-service-3.dns-6149.svc.cluster.local] May 25 10:14:42.308: INFO: File jessie_udp@dns-test-service-3.dns-6149.svc.cluster.local from pod dns-6149/dns-test-47d89dc9-2de1-4987-9db7-bda7f425af3e contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 25 10:14:42.309: INFO: Lookups using dns-6149/dns-test-47d89dc9-2de1-4987-9db7-bda7f425af3e failed for: [jessie_udp@dns-test-service-3.dns-6149.svc.cluster.local] May 25 10:14:47.310: INFO: DNS probes using dns-test-47d89dc9-2de1-4987-9db7-bda7f425af3e succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6149.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6149.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6149.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6149.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 10:15:03.363: INFO: DNS probes using dns-test-21010864-0e37-40f3-a729-57649d8b30e0 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:15:03.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6149" for this suite. 
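The stale `foo.example.com.` answers above show why the probe loops: after the ExternalName target changes, the old CNAME is still served until caches expire, so the test polls until the expected record appears. A sketch of that polling pattern with an injected resolver (the resolver and delay here are simulated, not real DNS):

```python
import itertools

def wait_for_cname(resolve, expected: str, attempts: int = 30) -> bool:
    """Poll `resolve` (a zero-argument callable returning the current CNAME
    answer) until it matches `expected`, mirroring the 30-iteration dig
    loop the probe pods run."""
    for _ in range(attempts):
        if resolve() == expected:
            return True
    return False

# Simulate propagation delay: the old record is served a few times
# before the updated one appears.
answers = itertools.chain(["foo.example.com."] * 3,
                          itertools.repeat("bar.example.com."))
ok = wait_for_cname(lambda: next(answers), "bar.example.com.")
```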
• [SLOW TEST:30.196 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":34,"skipped":535,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:14:56.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replication controller my-hostname-basic-e9eb0bee-0f01-4d5c-8e10-ee77a5067ac8 May 25 10:14:57.195: INFO: Pod name my-hostname-basic-e9eb0bee-0f01-4d5c-8e10-ee77a5067ac8: Found 0 pods out of 1 May 25 10:15:02.199: INFO: Pod name my-hostname-basic-e9eb0bee-0f01-4d5c-8e10-ee77a5067ac8: Found 1 pods out of 1 May 25 10:15:02.199: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-e9eb0bee-0f01-4d5c-8e10-ee77a5067ac8" are running May 25 10:15:02.202: INFO: Pod "my-hostname-basic-e9eb0bee-0f01-4d5c-8e10-ee77a5067ac8-km8qh" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-25 10:14:57 +0000 UTC 
Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-25 10:15:02 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-25 10:15:02 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-25 10:14:57 +0000 UTC Reason: Message:}]) May 25 10:15:02.203: INFO: Trying to dial the pod May 25 10:15:07.215: INFO: Controller my-hostname-basic-e9eb0bee-0f01-4d5c-8e10-ee77a5067ac8: Got expected result from replica 1 [my-hostname-basic-e9eb0bee-0f01-4d5c-8e10-ee77a5067ac8-km8qh]: "my-hostname-basic-e9eb0bee-0f01-4d5c-8e10-ee77a5067ac8-km8qh", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:15:07.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7957" for this suite. 
• [SLOW TEST:10.308 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":15,"skipped":350,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:15:07.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium May 25 10:15:07.280: INFO: Waiting up to 5m0s for pod "pod-2040609e-bb1b-4a66-86b2-f09984fc5ab1" in namespace "emptydir-3245" to be "Succeeded or Failed" May 25 10:15:07.283: INFO: Pod "pod-2040609e-bb1b-4a66-86b2-f09984fc5ab1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.218775ms May 25 10:15:09.287: INFO: Pod "pod-2040609e-bb1b-4a66-86b2-f09984fc5ab1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007886316s STEP: Saw pod success May 25 10:15:09.288: INFO: Pod "pod-2040609e-bb1b-4a66-86b2-f09984fc5ab1" satisfied condition "Succeeded or Failed" May 25 10:15:09.291: INFO: Trying to get logs from node v1.21-worker pod pod-2040609e-bb1b-4a66-86b2-f09984fc5ab1 container test-container: STEP: delete the pod May 25 10:15:09.307: INFO: Waiting for pod pod-2040609e-bb1b-4a66-86b2-f09984fc5ab1 to disappear May 25 10:15:09.310: INFO: Pod pod-2040609e-bb1b-4a66-86b2-f09984fc5ab1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:15:09.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3245" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":354,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:14:00.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0525 10:14:10.472740 23 
metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 25 10:15:12.490: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. May 25 10:15:12.490: INFO: Deleting pod "simpletest-rc-to-be-deleted-2fdlk" in namespace "gc-8187" May 25 10:15:12.497: INFO: Deleting pod "simpletest-rc-to-be-deleted-4vskn" in namespace "gc-8187" May 25 10:15:12.503: INFO: Deleting pod "simpletest-rc-to-be-deleted-7fhvc" in namespace "gc-8187" May 25 10:15:12.511: INFO: Deleting pod "simpletest-rc-to-be-deleted-gntd8" in namespace "gc-8187" May 25 10:15:12.518: INFO: Deleting pod "simpletest-rc-to-be-deleted-k965z" in namespace "gc-8187" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:15:12.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8187" for this suite. 
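The garbage-collector test above works by giving half of the `simpletest-rc-to-be-deleted` pods a second owner, `simpletest-rc-to-stay`, before deleting the first RC. Conceptually, each such pod carries two entries in `metadata.ownerReferences`, roughly like the fragment below (UIDs are hypothetical placeholders; the real test sets the references through the API, not YAML). The GC keeps the pod alive because one valid owner remains:

```yaml
metadata:
  ownerReferences:
    - apiVersion: v1
      kind: ReplicationController
      name: simpletest-rc-to-be-deleted   # the owner being deleted
      uid: 00000000-0000-0000-0000-000000000001   # hypothetical UID
      controller: true                    # only one reference may be the controller
    - apiVersion: v1
      kind: ReplicationController
      name: simpletest-rc-to-stay         # surviving owner; blocks garbage collection
      uid: 00000000-0000-0000-0000-000000000002   # hypothetical UID
```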
• [SLOW TEST:72.186 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":33,"skipped":608,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:14:35.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath May 25 10:14:37.290: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-710 PodName:var-expansion-ea20abcd-4bee-4bf4-96a3-42d250c8b5a4 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 10:14:37.291: INFO: >>> kubeConfig: /root/.kube/config STEP: test for file in mounted path May 25 10:14:37.381: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-710 PodName:var-expansion-ea20abcd-4bee-4bf4-96a3-42d250c8b5a4 ContainerName:dapi-container 
Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 25 10:14:37.381: INFO: >>> kubeConfig: /root/.kube/config STEP: updating the annotation value May 25 10:14:38.025: INFO: Successfully updated pod "var-expansion-ea20abcd-4bee-4bf4-96a3-42d250c8b5a4" STEP: waiting for annotated pod running STEP: deleting the pod gracefully May 25 10:14:38.028: INFO: Deleting pod "var-expansion-ea20abcd-4bee-4bf4-96a3-42d250c8b5a4" in namespace "var-expansion-710" May 25 10:14:38.032: INFO: Wait up to 5m0s for pod "var-expansion-ea20abcd-4bee-4bf4-96a3-42d250c8b5a4" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:15:14.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-710" for this suite. • [SLOW TEST:38.805 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":9,"skipped":63,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:15:14.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:15:14.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-2220" for this suite. • ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:13:13.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-28146133-3e14-406a-8f9c-f7703642df31 in namespace container-probe-6189 May 25 10:13:15.367: INFO: Started pod liveness-28146133-3e14-406a-8f9c-f7703642df31 in namespace container-probe-6189 STEP: checking the pod's current state and verifying that restartCount is present May 25 10:13:15.371: INFO: Initial restart count of pod liveness-28146133-3e14-406a-8f9c-f7703642df31 is 0 May 25 10:13:35.419: INFO: Restart count of pod container-probe-6189/liveness-28146133-3e14-406a-8f9c-f7703642df31 is now 1 (20.047942217s elapsed) May 25 10:13:55.524: INFO: Restart count of pod 
container-probe-6189/liveness-28146133-3e14-406a-8f9c-f7703642df31 is now 2 (40.152851174s elapsed) May 25 10:14:15.571: INFO: Restart count of pod container-probe-6189/liveness-28146133-3e14-406a-8f9c-f7703642df31 is now 3 (1m0.200499486s elapsed) May 25 10:14:35.895: INFO: Restart count of pod container-probe-6189/liveness-28146133-3e14-406a-8f9c-f7703642df31 is now 4 (1m20.523991974s elapsed) May 25 10:15:38.811: INFO: Restart count of pod container-probe-6189/liveness-28146133-3e14-406a-8f9c-f7703642df31 is now 5 (2m23.440365734s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:15:38.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6189" for this suite. • [SLOW TEST:145.510 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":463,"failed":0} SS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:15:09.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7817.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7817.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7817.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7817.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7817.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7817.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7817.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7817.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7817.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7817.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7817.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7817.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7817.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 17.164.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.164.17_udp@PTR;check="$$(dig +tcp +noall +answer +search 17.164.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.164.17_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7817.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7817.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7817.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7817.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7817.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7817.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7817.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7817.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7817.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7817.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7817.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7817.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7817.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 17.164.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.164.17_udp@PTR;check="$$(dig +tcp +noall +answer +search 17.164.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.164.17_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 10:15:11.443: INFO: Unable to read wheezy_udp@dns-test-service.dns-7817.svc.cluster.local from pod dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1: the server could not find the requested resource (get pods dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1) May 25 10:15:11.447: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7817.svc.cluster.local from pod dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1: the server could not find the requested resource (get pods dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1) May 25 10:15:11.451: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7817.svc.cluster.local from pod dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1: the server could not find the requested resource (get pods dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1) May 25 10:15:11.455: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7817.svc.cluster.local from pod dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1: the server could not find the requested resource (get pods dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1) May 25 10:15:11.484: INFO: Unable to read jessie_udp@dns-test-service.dns-7817.svc.cluster.local from pod dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1: the server could not find the requested resource (get pods dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1) May 25 10:15:11.488: INFO: Unable to read jessie_tcp@dns-test-service.dns-7817.svc.cluster.local from pod dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1: the server could not find the requested resource (get pods dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1) May 25 10:15:11.492: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7817.svc.cluster.local from pod 
dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1: the server could not find the requested resource (get pods dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1) May 25 10:15:11.518: INFO: Lookups using dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1 failed for: [wheezy_udp@dns-test-service.dns-7817.svc.cluster.local wheezy_tcp@dns-test-service.dns-7817.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7817.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7817.svc.cluster.local jessie_udp@dns-test-service.dns-7817.svc.cluster.local jessie_tcp@dns-test-service.dns-7817.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7817.svc.cluster.local] May 25 10:15:16.525: INFO: Unable to read wheezy_udp@dns-test-service.dns-7817.svc.cluster.local from pod dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1: the server could not find the requested resource (get pods dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1) May 25 10:15:16.529: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7817.svc.cluster.local from pod dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1: the server could not find the requested resource (get pods dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1) May 25 10:15:16.562: INFO: Unable to read jessie_udp@dns-test-service.dns-7817.svc.cluster.local from pod dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1: the server could not find the requested resource (get pods dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1) May 25 10:15:16.566: INFO: Unable to read jessie_tcp@dns-test-service.dns-7817.svc.cluster.local from pod dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1: the server could not find the requested resource (get pods dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1) May 25 10:15:16.596: INFO: Lookups using dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1 failed for: [wheezy_udp@dns-test-service.dns-7817.svc.cluster.local wheezy_tcp@dns-test-service.dns-7817.svc.cluster.local 
jessie_udp@dns-test-service.dns-7817.svc.cluster.local jessie_tcp@dns-test-service.dns-7817.svc.cluster.local] May 25 10:15:21.524: INFO: Unable to read wheezy_udp@dns-test-service.dns-7817.svc.cluster.local from pod dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1: the server could not find the requested resource (get pods dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1) May 25 10:15:21.528: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7817.svc.cluster.local from pod dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1: the server could not find the requested resource (get pods dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1) May 25 10:15:21.562: INFO: Unable to read jessie_udp@dns-test-service.dns-7817.svc.cluster.local from pod dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1: the server could not find the requested resource (get pods dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1) May 25 10:15:21.566: INFO: Unable to read jessie_tcp@dns-test-service.dns-7817.svc.cluster.local from pod dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1: the server could not find the requested resource (get pods dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1) May 25 10:15:21.596: INFO: Lookups using dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1 failed for: [wheezy_udp@dns-test-service.dns-7817.svc.cluster.local wheezy_tcp@dns-test-service.dns-7817.svc.cluster.local jessie_udp@dns-test-service.dns-7817.svc.cluster.local jessie_tcp@dns-test-service.dns-7817.svc.cluster.local] May 25 10:15:28.787: INFO: Unable to read wheezy_udp@dns-test-service.dns-7817.svc.cluster.local from pod dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1: the server could not find the requested resource (get pods dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1) May 25 10:15:28.791: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7817.svc.cluster.local from pod dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1: the server could not find the requested resource (get 
pods dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1) May 25 10:15:30.498: INFO: Unable to read jessie_udp@dns-test-service.dns-7817.svc.cluster.local from pod dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1: the server could not find the requested resource (get pods dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1) May 25 10:15:30.502: INFO: Unable to read jessie_tcp@dns-test-service.dns-7817.svc.cluster.local from pod dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1: the server could not find the requested resource (get pods dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1) May 25 10:15:30.532: INFO: Lookups using dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1 failed for: [wheezy_udp@dns-test-service.dns-7817.svc.cluster.local wheezy_tcp@dns-test-service.dns-7817.svc.cluster.local jessie_udp@dns-test-service.dns-7817.svc.cluster.local jessie_tcp@dns-test-service.dns-7817.svc.cluster.local] May 25 10:15:31.523: INFO: Unable to read wheezy_udp@dns-test-service.dns-7817.svc.cluster.local from pod dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1: the server could not find the requested resource (get pods dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1) May 25 10:15:31.527: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7817.svc.cluster.local from pod dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1: the server could not find the requested resource (get pods dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1) May 25 10:15:31.561: INFO: Unable to read jessie_udp@dns-test-service.dns-7817.svc.cluster.local from pod dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1: the server could not find the requested resource (get pods dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1) May 25 10:15:31.565: INFO: Unable to read jessie_tcp@dns-test-service.dns-7817.svc.cluster.local from pod dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1: the server could not find the requested resource (get pods dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1) May 25 
10:15:31.888: INFO: Lookups using dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1 failed for: [wheezy_udp@dns-test-service.dns-7817.svc.cluster.local wheezy_tcp@dns-test-service.dns-7817.svc.cluster.local jessie_udp@dns-test-service.dns-7817.svc.cluster.local jessie_tcp@dns-test-service.dns-7817.svc.cluster.local] May 25 10:15:36.525: INFO: Unable to read wheezy_udp@dns-test-service.dns-7817.svc.cluster.local from pod dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1: the server could not find the requested resource (get pods dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1) May 25 10:15:36.529: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7817.svc.cluster.local from pod dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1: the server could not find the requested resource (get pods dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1) May 25 10:15:36.565: INFO: Unable to read jessie_udp@dns-test-service.dns-7817.svc.cluster.local from pod dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1: the server could not find the requested resource (get pods dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1) May 25 10:15:36.569: INFO: Unable to read jessie_tcp@dns-test-service.dns-7817.svc.cluster.local from pod dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1: the server could not find the requested resource (get pods dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1) May 25 10:15:36.602: INFO: Lookups using dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1 failed for: [wheezy_udp@dns-test-service.dns-7817.svc.cluster.local wheezy_tcp@dns-test-service.dns-7817.svc.cluster.local jessie_udp@dns-test-service.dns-7817.svc.cluster.local jessie_tcp@dns-test-service.dns-7817.svc.cluster.local] May 25 10:15:41.597: INFO: DNS probes using dns-7817/dns-test-bf85ac0d-eaea-491b-826a-5631a8d3a9e1 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:15:41.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7817" for this suite. • [SLOW TEST:32.269 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:15:38.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-2252 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2252 STEP: creating replication controller externalsvc in namespace services-2252 I0525 10:15:38.903731 32 runners.go:190] Created replication controller with name: externalsvc, namespace: services-2252, replica count: 2 I0525 10:15:41.954887 32 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the 
ClusterIP service to type=ExternalName May 25 10:15:41.971: INFO: Creating new exec pod May 25 10:15:43.983: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-2252 exec execpod57m6b -- /bin/sh -x -c nslookup clusterip-service.services-2252.svc.cluster.local' May 25 10:15:44.265: INFO: stderr: "+ nslookup clusterip-service.services-2252.svc.cluster.local\n" May 25 10:15:44.265: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-2252.svc.cluster.local\tcanonical name = externalsvc.services-2252.svc.cluster.local.\nName:\texternalsvc.services-2252.svc.cluster.local\nAddress: 10.96.46.91\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2252, will wait for the garbage collector to delete the pods May 25 10:15:44.324: INFO: Deleting ReplicationController externalsvc took: 5.11337ms May 25 10:15:44.425: INFO: Terminating ReplicationController externalsvc pods took: 100.678054ms May 25 10:15:47.936: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:15:47.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2252" for this suite. 
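After flipping the service to type=ExternalName, the test asserts that the old `clusterip-service` name now resolves as a CNAME to `externalsvc`. A sketch of that assertion, run against the nslookup stdout captured in this log rather than a live cluster (against a real cluster the output would come from the `kubectl exec ... nslookup` command shown above):

```shell
# stdout captured from the exec pod's nslookup in the run above
out='Server:         10.96.0.10
Address:        10.96.0.10#53

clusterip-service.services-2252.svc.cluster.local  canonical name = externalsvc.services-2252.svc.cluster.local.
Name:   externalsvc.services-2252.svc.cluster.local
Address: 10.96.46.91'

# The reachability check: the ClusterIP name must now be a CNAME
# pointing at the externalsvc backing service.
if echo "$out" | grep -q 'canonical name = externalsvc.services-2252.svc.cluster.local.'; then
  echo "CNAME OK"
fi
```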
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:9.116 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":25,"skipped":465,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:15:47.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-1ac9256b-fc33-441e-8062-a1aeed65c651 STEP: Creating a pod to test consume configMaps May 25 10:15:48.034: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-26f5ee95-2699-40d3-95b7-173bfb628549" in namespace "projected-3420" to be "Succeeded or Failed" May 25 10:15:48.037: INFO: Pod "pod-projected-configmaps-26f5ee95-2699-40d3-95b7-173bfb628549": Phase="Pending", Reason="", readiness=false. Elapsed: 3.068955ms May 25 10:15:50.041: INFO: Pod "pod-projected-configmaps-26f5ee95-2699-40d3-95b7-173bfb628549": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006585703s STEP: Saw pod success May 25 10:15:50.041: INFO: Pod "pod-projected-configmaps-26f5ee95-2699-40d3-95b7-173bfb628549" satisfied condition "Succeeded or Failed" May 25 10:15:50.044: INFO: Trying to get logs from node v1.21-worker pod pod-projected-configmaps-26f5ee95-2699-40d3-95b7-173bfb628549 container agnhost-container: STEP: delete the pod May 25 10:15:50.058: INFO: Waiting for pod pod-projected-configmaps-26f5ee95-2699-40d3-95b7-173bfb628549 to disappear May 25 10:15:50.061: INFO: Pod pod-projected-configmaps-26f5ee95-2699-40d3-95b7-173bfb628549 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:15:50.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3420" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":484,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:15:50.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] 
[sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:15:50.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3507" for this suite. • ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":27,"skipped":527,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:14:47.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0525 10:14:56.084850 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 25 10:15:58.890: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:15:58.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1737" for this suite. 
• [SLOW TEST:71.449 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":38,"skipped":722,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:14:09.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0525 10:14:09.673024 25 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: Ensuring more than one job is running at a time STEP: Ensuring at least two running jobs exists by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:16:01.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-7197" for this suite. 
• [SLOW TEST:112.091 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":37,"skipped":478,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:10:47.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0525 10:10:47.750832 21 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ForbidConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring no more jobs are scheduled STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:16:01.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-5648" for this suite. 
• [SLOW TEST:314.060 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":-1,"completed":11,"skipped":205,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:01.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:16:01.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-3184" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":12,"skipped":241,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:01.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:16:02.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8689" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":13,"skipped":252,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:15:58.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-ef68d1a8-7065-4c87-859a-04d115e9e584 STEP: Creating secret with name s-test-opt-upd-2a80a2aa-7922-4e4b-b33d-59e652318b62 STEP: Creating the pod May 25 10:15:58.986: INFO: The status of Pod pod-projected-secrets-ba554467-1069-404e-b50e-f4e44c6bdd71 is Pending, waiting for it to be Running (with Ready = true) May 25 10:16:00.991: INFO: The status of Pod pod-projected-secrets-ba554467-1069-404e-b50e-f4e44c6bdd71 is Running (Ready = true) STEP: Deleting secret s-test-opt-del-ef68d1a8-7065-4c87-859a-04d115e9e584 STEP: Updating secret s-test-opt-upd-2a80a2aa-7922-4e4b-b33d-59e652318b62 STEP: Creating secret with name s-test-opt-create-b00febfc-928b-4a32-a902-559ceff47fc2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:16:03.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8136" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":732,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:15:03.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 25 10:15:03.448: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-754 f16da29f-b3b8-42da-b2e4-25dc30081227 502905 0 2021-05-25 10:15:03 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-25 10:15:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 25 10:15:03.449: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-754 f16da29f-b3b8-42da-b2e4-25dc30081227 502905 0 2021-05-25 10:15:03 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-25 10:15:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and 
ensuring the correct watchers observe the notification May 25 10:15:13.457: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-754 f16da29f-b3b8-42da-b2e4-25dc30081227 503130 0 2021-05-25 10:15:03 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-25 10:15:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 25 10:15:13.458: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-754 f16da29f-b3b8-42da-b2e4-25dc30081227 503130 0 2021-05-25 10:15:03 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-25 10:15:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 25 10:15:23.883: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-754 f16da29f-b3b8-42da-b2e4-25dc30081227 503376 0 2021-05-25 10:15:03 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-25 10:15:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 25 10:15:23.884: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-754 f16da29f-b3b8-42da-b2e4-25dc30081227 503376 0 2021-05-25 10:15:03 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-25 10:15:13 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 25 10:15:33.890: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-754 f16da29f-b3b8-42da-b2e4-25dc30081227 503395 0 2021-05-25 10:15:03 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-25 10:15:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 25 10:15:33.891: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-754 f16da29f-b3b8-42da-b2e4-25dc30081227 503395 0 2021-05-25 10:15:03 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-25 10:15:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 25 10:15:43.898: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-754 0f91bf56-216d-40c5-a8e2-f57571b83ccf 503551 0 2021-05-25 10:15:43 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-05-25 10:15:43 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 25 10:15:43.898: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-754 0f91bf56-216d-40c5-a8e2-f57571b83ccf 503551 0 2021-05-25 10:15:43 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] 
[{e2e.test Update v1 2021-05-25 10:15:43 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 25 10:15:53.904: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-754 0f91bf56-216d-40c5-a8e2-f57571b83ccf 503754 0 2021-05-25 10:15:43 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-05-25 10:15:43 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 25 10:15:53.904: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-754 0f91bf56-216d-40c5-a8e2-f57571b83ccf 503754 0 2021-05-25 10:15:43 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-05-25 10:15:43 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:16:03.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-754" for this suite. 
• [SLOW TEST:60.509 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":35,"skipped":540,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:03.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser May 25 10:16:04.005: INFO: Waiting up to 5m0s for pod "security-context-e61ad8f5-ab2d-4ee8-ab32-49b07fe706e8" in namespace "security-context-5621" to be "Succeeded or Failed" May 25 10:16:04.008: INFO: Pod "security-context-e61ad8f5-ab2d-4ee8-ab32-49b07fe706e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.946856ms May 25 10:16:06.012: INFO: Pod "security-context-e61ad8f5-ab2d-4ee8-ab32-49b07fe706e8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007164925s STEP: Saw pod success May 25 10:16:06.012: INFO: Pod "security-context-e61ad8f5-ab2d-4ee8-ab32-49b07fe706e8" satisfied condition "Succeeded or Failed" May 25 10:16:06.015: INFO: Trying to get logs from node v1.21-worker pod security-context-e61ad8f5-ab2d-4ee8-ab32-49b07fe706e8 container test-container: STEP: delete the pod May 25 10:16:06.029: INFO: Waiting for pod security-context-e61ad8f5-ab2d-4ee8-ab32-49b07fe706e8 to disappear May 25 10:16:06.032: INFO: Pod security-context-e61ad8f5-ab2d-4ee8-ab32-49b07fe706e8 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:16:06.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-5621" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":36,"skipped":570,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:06.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:16:06.102: INFO: Got root ca configmap in namespace "svcaccounts-2880" May 25 10:16:06.111: INFO: Deleted root ca configmap in namespace "svcaccounts-2880" STEP: waiting for a new root ca configmap created May 25 10:16:06.615: INFO: Recreated root ca configmap in 
namespace "svcaccounts-2880" May 25 10:16:06.620: INFO: Updated root ca configmap in namespace "svcaccounts-2880" STEP: waiting for the root ca configmap reconciled May 25 10:16:07.179: INFO: Reconciled root ca configmap in namespace "svcaccounts-2880" [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:16:07.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2880" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":37,"skipped":580,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:02.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod May 25 10:16:02.099: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:16:07.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6875" for this suite. 
• [SLOW TEST:5.519 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":14,"skipped":271,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":17,"skipped":380,"failed":0} [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:15:41.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: referencing a single matching pod STEP: referencing matching pods with named port STEP: creating empty Endpoints and EndpointSlices for no matching Pods STEP: recreating EndpointSlices after they've been deleted May 25 10:16:01.905: INFO: EndpointSlice for Service endpointslice-2061/example-named-port not found [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:16:12.185: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-2061" for this suite. • [SLOW TEST:30.548 seconds] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":18,"skipped":380,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:07.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-ccdf1d03-9c51-4d72-bccf-57899bf3fb46 STEP: Creating configMap with name cm-test-opt-upd-70cf2cc1-f5c0-45d7-940e-f4d58ab216b8 STEP: Creating the pod May 25 10:16:08.306: INFO: The status of Pod pod-configmaps-f3429edc-a9d3-4ba5-b532-5b95599c549a is Pending, waiting for it to be Running (with Ready = true) May 25 10:16:10.310: INFO: The status of Pod pod-configmaps-f3429edc-a9d3-4ba5-b532-5b95599c549a is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-ccdf1d03-9c51-4d72-bccf-57899bf3fb46 STEP: Updating configmap cm-test-opt-upd-70cf2cc1-f5c0-45d7-940e-f4d58ab216b8 STEP: Creating configMap with name 
cm-test-opt-create-05333cda-5171-456d-9abd-12d568509c18 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:16:12.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1223" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":294,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:12:04.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-5ec5667d-4606-484a-bfe7-86fa6ce0ab1f in namespace container-probe-600 May 25 10:12:13.383: INFO: Started pod liveness-5ec5667d-4606-484a-bfe7-86fa6ce0ab1f in namespace container-probe-600 STEP: checking the pod's current state and verifying that restartCount is present May 25 10:12:13.387: INFO: Initial restart count of pod liveness-5ec5667d-4606-484a-bfe7-86fa6ce0ab1f is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:16:14.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-600" for this suite. • [SLOW TEST:250.792 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":166,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:12.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium May 25 10:16:12.541: INFO: Waiting up to 5m0s for pod "pod-d9321b39-2a01-421d-a217-c0377d34502d" in namespace "emptydir-668" to be "Succeeded or Failed" May 25 10:16:12.543: INFO: Pod "pod-d9321b39-2a01-421d-a217-c0377d34502d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.782018ms May 25 10:16:14.679: INFO: Pod "pod-d9321b39-2a01-421d-a217-c0377d34502d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.138585494s STEP: Saw pod success May 25 10:16:14.679: INFO: Pod "pod-d9321b39-2a01-421d-a217-c0377d34502d" satisfied condition "Succeeded or Failed" May 25 10:16:14.683: INFO: Trying to get logs from node v1.21-worker2 pod pod-d9321b39-2a01-421d-a217-c0377d34502d container test-container: STEP: delete the pod May 25 10:16:14.882: INFO: Waiting for pod pod-d9321b39-2a01-421d-a217-c0377d34502d to disappear May 25 10:16:14.886: INFO: Pod pod-d9321b39-2a01-421d-a217-c0377d34502d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:16:14.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-668" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":354,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:15.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 25 10:16:17.384: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-5a83724e-4f60-493c-9f5f-90af499515cb" in namespace "projected-1082" to be "Succeeded or Failed" May 25 10:16:17.391: INFO: Pod "downwardapi-volume-5a83724e-4f60-493c-9f5f-90af499515cb": Phase="Pending", Reason="", readiness=false. Elapsed: 7.073961ms May 25 10:16:19.879: INFO: Pod "downwardapi-volume-5a83724e-4f60-493c-9f5f-90af499515cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.495473967s May 25 10:16:21.885: INFO: Pod "downwardapi-volume-5a83724e-4f60-493c-9f5f-90af499515cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.500911854s STEP: Saw pod success May 25 10:16:21.885: INFO: Pod "downwardapi-volume-5a83724e-4f60-493c-9f5f-90af499515cb" satisfied condition "Succeeded or Failed" May 25 10:16:21.888: INFO: Trying to get logs from node v1.21-worker2 pod downwardapi-volume-5a83724e-4f60-493c-9f5f-90af499515cb container client-container: STEP: delete the pod May 25 10:16:21.980: INFO: Waiting for pod downwardapi-volume-5a83724e-4f60-493c-9f5f-90af499515cb to disappear May 25 10:16:21.984: INFO: Pod downwardapi-volume-5a83724e-4f60-493c-9f5f-90af499515cb no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:16:21.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1082" for this suite. 
• [SLOW TEST:6.163 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":431,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:15.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's command May 25 10:16:16.981: INFO: Waiting up to 5m0s for pod "var-expansion-a8548e2a-42f4-4cae-9a26-f9929ad6ae9c" in namespace "var-expansion-1329" to be "Succeeded or Failed" May 25 10:16:16.984: INFO: Pod "var-expansion-a8548e2a-42f4-4cae-9a26-f9929ad6ae9c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.152024ms May 25 10:16:19.178: INFO: Pod "var-expansion-a8548e2a-42f4-4cae-9a26-f9929ad6ae9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197334591s May 25 10:16:21.184: INFO: Pod "var-expansion-a8548e2a-42f4-4cae-9a26-f9929ad6ae9c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.202935276s May 25 10:16:23.188: INFO: Pod "var-expansion-a8548e2a-42f4-4cae-9a26-f9929ad6ae9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.207682601s STEP: Saw pod success May 25 10:16:23.189: INFO: Pod "var-expansion-a8548e2a-42f4-4cae-9a26-f9929ad6ae9c" satisfied condition "Succeeded or Failed" May 25 10:16:23.192: INFO: Trying to get logs from node v1.21-worker2 pod var-expansion-a8548e2a-42f4-4cae-9a26-f9929ad6ae9c container dapi-container: STEP: delete the pod May 25 10:16:23.207: INFO: Waiting for pod var-expansion-a8548e2a-42f4-4cae-9a26-f9929ad6ae9c to disappear May 25 10:16:23.210: INFO: Pod var-expansion-a8548e2a-42f4-4cae-9a26-f9929ad6ae9c no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:16:23.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1329" for this suite. • [SLOW TEST:7.831 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":168,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:22.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:16:22.040: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes May 25 10:16:22.050: INFO: The status of Pod pod-logs-websocket-b205d2de-3f72-4410-99cf-dca58d255382 is Pending, waiting for it to be Running (with Ready = true) May 25 10:16:24.054: INFO: The status of Pod pod-logs-websocket-b205d2de-3f72-4410-99cf-dca58d255382 is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:16:24.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8163" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":435,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:12.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1514 [It] should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 May 25 10:16:12.244: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-9645 run e2e-test-httpd-pod --restart=Never --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1' May 25 10:16:12.378: INFO: stderr: "" May 25 10:16:12.378: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1518 May 25 10:16:12.380: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-9645 delete pods e2e-test-httpd-pod' May 25 10:16:25.455: INFO: stderr: "" May 25 10:16:25.455: INFO: stdout: "pod 
\"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:16:25.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9645" for this suite. • [SLOW TEST:13.251 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1511 should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":19,"skipped":383,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:23.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 10:16:23.924: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the 
service has paired with the endpoint May 25 10:16:26.941: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:16:27.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7846" for this suite. STEP: Destroying namespace "webhook-7846-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":15,"skipped":173,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:03.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the 
expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:16:27.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6185" for this suite. • [SLOW TEST:23.983 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when starting a container that exits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":749,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:25.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do 
check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6201.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6201.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 10:16:27.557: INFO: DNS probes using dns-6201/dns-test-d407d8f1-fad2-47b6-b007-951c348e5b6d succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:16:27.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6201" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":20,"skipped":388,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:27.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching May 25 10:16:27.711: INFO: starting watch STEP: patching STEP: updating May 25 10:16:27.720: INFO: waiting for watch events with expected annotations 
May 25 10:16:27.720: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:16:27.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-6144" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":41,"skipped":756,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:27.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-e81d529d-5b90-4921-ac29-6183a2ec761f STEP: Creating a pod to test consume secrets May 25 10:16:27.109: INFO: Waiting up to 5m0s for pod "pod-secrets-042dfafb-55ef-467f-a49f-0fefa78daf94" in namespace "secrets-8072" to be "Succeeded or Failed" May 25 10:16:27.112: INFO: Pod "pod-secrets-042dfafb-55ef-467f-a49f-0fefa78daf94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.556963ms May 25 10:16:29.116: INFO: Pod "pod-secrets-042dfafb-55ef-467f-a49f-0fefa78daf94": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006465039s May 25 10:16:31.120: INFO: Pod "pod-secrets-042dfafb-55ef-467f-a49f-0fefa78daf94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010913237s STEP: Saw pod success May 25 10:16:31.120: INFO: Pod "pod-secrets-042dfafb-55ef-467f-a49f-0fefa78daf94" satisfied condition "Succeeded or Failed" May 25 10:16:31.123: INFO: Trying to get logs from node v1.21-worker pod pod-secrets-042dfafb-55ef-467f-a49f-0fefa78daf94 container secret-volume-test: STEP: delete the pod May 25 10:16:31.142: INFO: Waiting for pod pod-secrets-042dfafb-55ef-467f-a49f-0fefa78daf94 to disappear May 25 10:16:31.145: INFO: Pod pod-secrets-042dfafb-55ef-467f-a49f-0fefa78daf94 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:16:31.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8072" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":183,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:31.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the 
watch once it receives two notifications May 25 10:16:31.231: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8457 94cd5e29-873e-4abe-8747-0999d8017086 504738 0 2021-05-25 10:16:31 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-05-25 10:16:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 25 10:16:31.231: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8457 94cd5e29-873e-4abe-8747-0999d8017086 504739 0 2021-05-25 10:16:31 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-05-25 10:16:31 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 25 10:16:31.245: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8457 94cd5e29-873e-4abe-8747-0999d8017086 504740 0 2021-05-25 10:16:31 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-05-25 10:16:31 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 25 10:16:31.246: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8457 94cd5e29-873e-4abe-8747-0999d8017086 504741 0 2021-05-25 10:16:31 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] 
map[] [] [] [{e2e.test Update v1 2021-05-25 10:16:31 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:16:31.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8457" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":17,"skipped":200,"failed":0} S ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:27.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:16:27.627: INFO: Creating simple deployment test-new-deployment May 25 10:16:27.637: INFO: deployment "test-new-deployment" doesn't have the required revision set May 25 10:16:29.646: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534587, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534587, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534587, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534587, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the deployment Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 May 25 10:16:31.681: INFO: Deployment "test-new-deployment": &Deployment{ObjectMeta:{test-new-deployment deployment-2496 4ece000d-6e13-4446-ae4e-27a81e6b6cf6 504764 3 2021-05-25 10:16:27 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2021-05-25 10:16:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 
2021-05-25 10:16:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000a7cc58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum
availability.,LastUpdateTime:2021-05-25 10:16:31 +0000 UTC,LastTransitionTime:2021-05-25 10:16:31 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-847dcfb7fb" has successfully progressed.,LastUpdateTime:2021-05-25 10:16:31 +0000 UTC,LastTransitionTime:2021-05-25 10:16:27 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 25 10:16:31.685: INFO: New ReplicaSet "test-new-deployment-847dcfb7fb" of Deployment "test-new-deployment": &ReplicaSet{ObjectMeta:{test-new-deployment-847dcfb7fb deployment-2496 c0324bd3-d488-4e0e-8195-de9effb6b968 504769 2 2021-05-25 10:16:27 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment 4ece000d-6e13-4446-ae4e-27a81e6b6cf6 0xc000a7d037 0xc000a7d038}] [] [{kube-controller-manager Update apps/v1 2021-05-25 10:16:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ece000d-6e13-4446-ae4e-27a81e6b6cf6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000a7d0a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] 
[] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 25 10:16:31.689: INFO: Pod "test-new-deployment-847dcfb7fb-49slm" is not available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-49slm test-new-deployment-847dcfb7fb- deployment-2496 2d3251f4-180d-45c9-bd73-67ce4bcc6740 504768 0 2021-05-25 10:16:31 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb c0324bd3-d488-4e0e-8195-de9effb6b968 0xc004b75a17 0xc004b75a18}] [] [{kube-controller-manager Update v1 2021-05-25 10:16:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0324bd3-d488-4e0e-8195-de9effb6b968\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xh8f7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:toke
n,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xh8f7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompPr
ofile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:16:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 10:16:31.689: INFO: Pod "test-new-deployment-847dcfb7fb-7wckz" is available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-7wckz test-new-deployment-847dcfb7fb- deployment-2496 049f1942-7cae-4a01-bf36-ed5342adba45 504759 0 2021-05-25 10:16:27 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.178" ], "mac": "1a:92:e0:55:ac:75", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.178" ], "mac": "1a:92:e0:55:ac:75", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb c0324bd3-d488-4e0e-8195-de9effb6b968 0xc004b75b80 0xc004b75b81}] [] 
[{kube-controller-manager Update v1 2021-05-25 10:16:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0324bd3-d488-4e0e-8195-de9effb6b968\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-25 10:16:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-25 10:16:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.178\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2jmvq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeP
rojection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2jmvq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsU
ser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:16:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:16:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:16:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:16:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.1.178,StartTime:2021-05-25 10:16:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-25 10:16:29 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://f2303014f26b86928e7c88eaf47413bc12c06572311a2a1583e09c806e9cf489,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.178,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:16:31.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2496" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":21,"skipped":401,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:31.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test env composition May 25 10:16:31.745: INFO: Waiting up to 5m0s for pod "var-expansion-71cbcbdb-82b0-46d2-8874-fbcff137a695" in namespace "var-expansion-3946" to be "Succeeded or Failed" May 25 10:16:31.748: INFO: Pod "var-expansion-71cbcbdb-82b0-46d2-8874-fbcff137a695": Phase="Pending", Reason="", 
readiness=false. Elapsed: 3.248186ms May 25 10:16:33.753: INFO: Pod "var-expansion-71cbcbdb-82b0-46d2-8874-fbcff137a695": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008248825s May 25 10:16:35.758: INFO: Pod "var-expansion-71cbcbdb-82b0-46d2-8874-fbcff137a695": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013145818s STEP: Saw pod success May 25 10:16:35.758: INFO: Pod "var-expansion-71cbcbdb-82b0-46d2-8874-fbcff137a695" satisfied condition "Succeeded or Failed" May 25 10:16:35.761: INFO: Trying to get logs from node v1.21-worker2 pod var-expansion-71cbcbdb-82b0-46d2-8874-fbcff137a695 container dapi-container: STEP: delete the pod May 25 10:16:35.776: INFO: Waiting for pod var-expansion-71cbcbdb-82b0-46d2-8874-fbcff137a695 to disappear May 25 10:16:35.779: INFO: Pod var-expansion-71cbcbdb-82b0-46d2-8874-fbcff137a695 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:16:35.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3946" for this suite. 
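
The env-composition test that just passed creates a pod whose later env vars reference earlier ones via Kubernetes `$(VAR)` expansion, then checks the composed value inside the container. A minimal illustrative manifest of that pattern (a sketch only: the image tag, variable names, and values are hypothetical, not the exact spec the e2e framework generates; the container name `dapi-container` matches the log above):

```yaml
# Hypothetical sketch of the kind of pod the var-expansion test creates.
# The kubelet expands $(FOO) and $(BAR) in FOOBAR before starting the container.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29   # illustrative image
    command: ["sh", "-c", "echo \"$FOOBAR\""]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"   # composed from the two vars defined above
```

The test then waits for the pod to reach "Succeeded or Failed" and inspects the container log, as the entries above show.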
• ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":405,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:01.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5429 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5429;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5429 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5429;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5429.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5429.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5429.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5429.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5429.svc SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.dns-test-service.dns-5429.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5429.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5429.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5429.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5429.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5429.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5429.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5429.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 207.23.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.23.207_udp@PTR;check="$$(dig +tcp +noall +answer +search 207.23.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.23.207_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5429 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5429;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5429 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5429;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5429.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5429.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5429.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5429.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5429.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5429.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5429.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5429.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5429.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5429.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5429.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5429.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5429.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 207.23.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.23.207_udp@PTR;check="$$(dig +tcp +noall +answer +search 207.23.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.23.207_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 10:16:05.807: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2) May 25 10:16:05.812: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2) May 25 10:16:05.816: INFO: Unable to read wheezy_udp@dns-test-service.dns-5429 from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2) May 25 10:16:05.820: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5429 from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2) May 25 10:16:05.824: INFO: Unable to read wheezy_udp@dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods 
dns-test-627c488d-f369-4fd6-b092-52406901a0c2) May 25 10:16:05.828: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2) May 25 10:16:05.832: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2) May 25 10:16:05.836: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2) May 25 10:16:05.863: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2) May 25 10:16:05.867: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2) May 25 10:16:05.871: INFO: Unable to read jessie_udp@dns-test-service.dns-5429 from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2) May 25 10:16:05.875: INFO: Unable to read jessie_tcp@dns-test-service.dns-5429 from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2) May 25 10:16:05.879: INFO: Unable to read jessie_udp@dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the 
requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2) May 25 10:16:05.883: INFO: Unable to read jessie_tcp@dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2) May 25 10:16:05.886: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2) May 25 10:16:05.890: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2) May 25 10:16:05.914: INFO: Lookups using dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5429 wheezy_tcp@dns-test-service.dns-5429 wheezy_udp@dns-test-service.dns-5429.svc wheezy_tcp@dns-test-service.dns-5429.svc wheezy_udp@_http._tcp.dns-test-service.dns-5429.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5429.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5429 jessie_tcp@dns-test-service.dns-5429 jessie_udp@dns-test-service.dns-5429.svc jessie_tcp@dns-test-service.dns-5429.svc jessie_udp@_http._tcp.dns-test-service.dns-5429.svc jessie_tcp@_http._tcp.dns-test-service.dns-5429.svc] May 25 10:16:10.921: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2) May 25 10:16:10.925: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not 
find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:10.928: INFO: Unable to read wheezy_udp@dns-test-service.dns-5429 from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:10.932: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5429 from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:10.935: INFO: Unable to read wheezy_udp@dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:10.937: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:10.941: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:10.945: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:10.969: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:10.973: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:10.976: INFO: Unable to read jessie_udp@dns-test-service.dns-5429 from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:10.980: INFO: Unable to read jessie_tcp@dns-test-service.dns-5429 from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:10.985: INFO: Unable to read jessie_udp@dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:10.989: INFO: Unable to read jessie_tcp@dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:10.992: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:10.996: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:11.019: INFO: Lookups using dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5429 wheezy_tcp@dns-test-service.dns-5429 wheezy_udp@dns-test-service.dns-5429.svc wheezy_tcp@dns-test-service.dns-5429.svc wheezy_udp@_http._tcp.dns-test-service.dns-5429.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5429.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5429 jessie_tcp@dns-test-service.dns-5429 jessie_udp@dns-test-service.dns-5429.svc jessie_tcp@dns-test-service.dns-5429.svc jessie_udp@_http._tcp.dns-test-service.dns-5429.svc jessie_tcp@_http._tcp.dns-test-service.dns-5429.svc]
May 25 10:16:16.078: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:16.280: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:16.284: INFO: Unable to read wheezy_udp@dns-test-service.dns-5429 from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:16.780: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5429 from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:16.982: INFO: Unable to read wheezy_udp@dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:16.985: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:17.379: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:17.391: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:17.416: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:17.422: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:17.426: INFO: Unable to read jessie_udp@dns-test-service.dns-5429 from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:17.429: INFO: Unable to read jessie_tcp@dns-test-service.dns-5429 from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:17.432: INFO: Unable to read jessie_udp@dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:17.435: INFO: Unable to read jessie_tcp@dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:17.438: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:17.441: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:17.457: INFO: Lookups using dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5429 wheezy_tcp@dns-test-service.dns-5429 wheezy_udp@dns-test-service.dns-5429.svc wheezy_tcp@dns-test-service.dns-5429.svc wheezy_udp@_http._tcp.dns-test-service.dns-5429.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5429.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5429 jessie_tcp@dns-test-service.dns-5429 jessie_udp@dns-test-service.dns-5429.svc jessie_tcp@dns-test-service.dns-5429.svc jessie_udp@_http._tcp.dns-test-service.dns-5429.svc jessie_tcp@_http._tcp.dns-test-service.dns-5429.svc]
May 25 10:16:20.980: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:20.984: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:20.988: INFO: Unable to read wheezy_udp@dns-test-service.dns-5429 from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:20.991: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5429 from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:20.995: INFO: Unable to read wheezy_udp@dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:20.999: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:21.003: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:21.007: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:21.032: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:21.035: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:21.038: INFO: Unable to read jessie_udp@dns-test-service.dns-5429 from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:21.041: INFO: Unable to read jessie_tcp@dns-test-service.dns-5429 from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:21.045: INFO: Unable to read jessie_udp@dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:21.048: INFO: Unable to read jessie_tcp@dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:21.052: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:21.055: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:21.080: INFO: Lookups using dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5429 wheezy_tcp@dns-test-service.dns-5429 wheezy_udp@dns-test-service.dns-5429.svc wheezy_tcp@dns-test-service.dns-5429.svc wheezy_udp@_http._tcp.dns-test-service.dns-5429.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5429.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5429 jessie_tcp@dns-test-service.dns-5429 jessie_udp@dns-test-service.dns-5429.svc jessie_tcp@dns-test-service.dns-5429.svc jessie_udp@_http._tcp.dns-test-service.dns-5429.svc jessie_tcp@_http._tcp.dns-test-service.dns-5429.svc]
May 25 10:16:25.920: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:25.924: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:25.927: INFO: Unable to read wheezy_udp@dns-test-service.dns-5429 from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:25.931: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5429 from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:25.934: INFO: Unable to read wheezy_udp@dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:25.938: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:25.941: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:25.945: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:25.973: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:25.976: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:25.979: INFO: Unable to read jessie_udp@dns-test-service.dns-5429 from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:25.982: INFO: Unable to read jessie_tcp@dns-test-service.dns-5429 from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:25.985: INFO: Unable to read jessie_udp@dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:25.988: INFO: Unable to read jessie_tcp@dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:25.991: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:25.995: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:26.014: INFO: Lookups using dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5429 wheezy_tcp@dns-test-service.dns-5429 wheezy_udp@dns-test-service.dns-5429.svc wheezy_tcp@dns-test-service.dns-5429.svc wheezy_udp@_http._tcp.dns-test-service.dns-5429.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5429.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5429 jessie_tcp@dns-test-service.dns-5429 jessie_udp@dns-test-service.dns-5429.svc jessie_tcp@dns-test-service.dns-5429.svc jessie_udp@_http._tcp.dns-test-service.dns-5429.svc jessie_tcp@_http._tcp.dns-test-service.dns-5429.svc]
May 25 10:16:30.920: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:30.925: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:30.929: INFO: Unable to read wheezy_udp@dns-test-service.dns-5429 from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:30.933: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5429 from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:30.936: INFO: Unable to read wheezy_udp@dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:30.940: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:30.943: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:30.947: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:30.973: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:30.977: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:30.981: INFO: Unable to read jessie_udp@dns-test-service.dns-5429 from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:30.985: INFO: Unable to read jessie_tcp@dns-test-service.dns-5429 from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:30.988: INFO: Unable to read jessie_udp@dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:30.992: INFO: Unable to read jessie_tcp@dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:30.996: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:31.000: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5429.svc from pod dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2: the server could not find the requested resource (get pods dns-test-627c488d-f369-4fd6-b092-52406901a0c2)
May 25 10:16:31.022: INFO: Lookups using dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5429 wheezy_tcp@dns-test-service.dns-5429 wheezy_udp@dns-test-service.dns-5429.svc wheezy_tcp@dns-test-service.dns-5429.svc wheezy_udp@_http._tcp.dns-test-service.dns-5429.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5429.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5429 jessie_tcp@dns-test-service.dns-5429 jessie_udp@dns-test-service.dns-5429.svc jessie_tcp@dns-test-service.dns-5429.svc jessie_udp@_http._tcp.dns-test-service.dns-5429.svc jessie_tcp@_http._tcp.dns-test-service.dns-5429.svc]
May 25 10:16:36.023: INFO: DNS probes using dns-5429/dns-test-627c488d-f369-4fd6-b092-52406901a0c2 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:16:36.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5429" for this suite.
• [SLOW TEST:34.364 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":38,"skipped":494,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:16:36.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should support creating EndpointSlice API operations [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting /apis
STEP: getting /apis/discovery.k8s.io
STEP: getting /apis/discovery.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
May 25 10:16:36.183: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
May 25 10:16:36.187: INFO: starting watch
STEP: patching
STEP: updating
May 25 10:16:36.198: INFO: waiting for watch events with expected annotations
May 25 10:16:36.198: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:16:36.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-9538" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":39,"skipped":509,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] RuntimeClass
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:16:36.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename runtimeclass
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support RuntimeClasses API operations [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting /apis
STEP: getting /apis/node.k8s.io
STEP: getting /apis/node.k8s.io/v1
STEP: creating
STEP: watching
May 25 10:16:36.325: INFO: starting watch
STEP: getting
STEP: listing
STEP: patching
STEP: updating
May 25 10:16:36.341: INFO: waiting for watch events with expected annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-node] RuntimeClass
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:16:36.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-5923" for this suite.
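The failed-lookup summaries in the DNS test earlier in this log enumerate a fixed matrix of probe names: two resolver images (wheezy, jessie) crossed with two protocols (udp, tcp) and four name forms of the test service, from unqualified to SRV-style. A small sketch, reconstructed from the log output only (this is a hypothetical helper, not the e2e framework's actual Go code), reproduces that 16-entry matrix:

```python
# Illustrative sketch: rebuild the probe-name matrix that appears in the
# "Lookups using dns-5429/... failed for: [...]" summaries. Structure is
# inferred from the log; names like dns_probe_names are made up here.

def dns_probe_names(service, namespace):
    # The four name forms exercised by the test, least to most qualified.
    name_forms = [
        service,
        f"{service}.{namespace}",
        f"{service}.{namespace}.svc",
        f"_http._tcp.{service}.{namespace}.svc",  # SRV-style lookup
    ]
    images = ["wheezy", "jessie"]     # the two resolver images in the log
    protocols = ["udp", "tcp"]
    # Log ordering: image outermost, then name form, then protocol.
    return [
        f"{image}_{proto}@{name}"
        for image in images
        for name in name_forms
        for proto in protocols
    ]

names = dns_probe_names("dns-test-service", "dns-5429")
print(len(names))   # 16 probes per polling iteration, matching the summaries
```

The ordering mirrors the summary lines exactly: all wheezy probes first, each name form probed over UDP then TCP, then the jessie probes in the same order.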
•
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":40,"skipped":539,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:16:07.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Performing setup for networking test in namespace pod-network-test-1670
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 25 10:16:07.787: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 25 10:16:08.185: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 25 10:16:10.190: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 10:16:12.189: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 10:16:14.189: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 10:16:16.280: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 10:16:18.379: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 10:16:20.190: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 10:16:22.189: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 10:16:24.189: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 10:16:26.190: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 10:16:28.189: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 10:16:30.190: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 25 10:16:32.189: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 25 10:16:32.194: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 25 10:16:34.225: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
May 25 10:16:34.225: INFO: Going to poll 10.244.1.173 on port 8081 at least 0 times, with a maximum of 34 tries before failing
May 25 10:16:34.228: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.173 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1670 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 25 10:16:34.228: INFO: >>> kubeConfig: /root/.kube/config
May 25 10:16:35.365: INFO: Found all 1 expected endpoints: [netserver-0]
May 25 10:16:35.365: INFO: Going to poll 10.244.2.35 on port 8081 at least 0 times, with a maximum of 34 tries before failing
May 25 10:16:35.370: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.35 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1670 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 25 10:16:35.370: INFO: >>> kubeConfig: /root/.kube/config
May 25 10:16:36.496: INFO: Found all 1 expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:16:36.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1670" for this suite.
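The node-pod UDP check above shells out from a host-network pod with `echo hostName | nc -w 1 -u <podIP> 8081` and expects the netserver pod to echo back its hostname. The same request-reply exchange can be sketched in plain Python, with loopback sockets standing in for the pod IPs (names like `run_netserver` are illustrative, not the agnhost implementation):

```python
# Minimal stand-in for the UDP hostname-echo exchange used by the
# networking test: the "netserver" answers a "hostName" datagram with its
# identity; the prober sends one probe and reads one reply.
import socket
import threading

def run_netserver(sock, hostname):
    data, addr = sock.recvfrom(1024)          # wait for one probe
    if data.strip() == b"hostName":
        sock.sendto(hostname.encode(), addr)  # echo our identity back

def probe(server_addr, timeout=1.0):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)                 # mirrors `nc -w 1`
        s.sendto(b"hostName\n", server_addr)
        reply, _ = s.recvfrom(1024)
        return reply.decode()

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))                 # ephemeral port, not real 8081
t = threading.Thread(target=run_netserver, args=(server, "netserver-0"))
t.start()
hostname = probe(server.getsockname())
t.join()
server.close()
print(hostname)  # -> netserver-0
```

Because UDP gives no delivery guarantee, the real test retries the probe (up to the MaxTries of 34 logged above) before declaring the endpoint unreachable.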
• [SLOW TEST:29.306 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
Granular Checks: Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":585,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:16:35.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
May 25 10:16:35.858: INFO: Waiting up to 5m0s for pod "downwardapi-volume-33b6082b-7072-47e9-8ca4-79b552b3016b" in namespace "downward-api-9535" to be "Succeeded or Failed"
May 25 10:16:35.861: INFO: Pod "downwardapi-volume-33b6082b-7072-47e9-8ca4-79b552b3016b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.183426ms
May 25 10:16:37.866: INFO: Pod "downwardapi-volume-33b6082b-7072-47e9-8ca4-79b552b3016b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008007144s
May 25 10:16:39.981: INFO: Pod "downwardapi-volume-33b6082b-7072-47e9-8ca4-79b552b3016b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123865432s
May 25 10:16:42.079: INFO: Pod "downwardapi-volume-33b6082b-7072-47e9-8ca4-79b552b3016b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.221737679s
STEP: Saw pod success
May 25 10:16:42.079: INFO: Pod "downwardapi-volume-33b6082b-7072-47e9-8ca4-79b552b3016b" satisfied condition "Succeeded or Failed"
May 25 10:16:42.083: INFO: Trying to get logs from node v1.21-worker pod downwardapi-volume-33b6082b-7072-47e9-8ca4-79b552b3016b container client-container: 
STEP: delete the pod
May 25 10:16:42.179: INFO: Waiting for pod downwardapi-volume-33b6082b-7072-47e9-8ca4-79b552b3016b to disappear
May 25 10:16:42.279: INFO: Pod downwardapi-volume-33b6082b-7072-47e9-8ca4-79b552b3016b no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:16:42.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9535" for this suite.
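The repeated `Phase="Pending" ... Elapsed: ...` lines above come from the framework polling the pod until it reaches a terminal phase or the 5m0s deadline expires. A generic sketch of that wait pattern (hypothetical names; the real framework does this in Go with its own wait helpers):

```python
# Illustrative poll-until-terminal-phase loop, in the spirit of the
# 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"' messages.
# get_phase is a caller-supplied callable standing in for a pod GET.
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0):
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):   # terminal pod phases
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        time.sleep(interval)

# Fake pod that stays Pending for two polls, then succeeds.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases),
                                 timeout=10, interval=0.01)
print(result)  # -> Succeeded
```

Logging the elapsed time on every poll, as the framework does, makes slow-starting pods easy to spot when scanning a failed run.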
• [SLOW TEST:7.165 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":417,"failed":0}
SS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:16:36.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
May 25 10:16:36.415: INFO: Waiting up to 5m0s for pod "downward-api-ab6e2307-fedd-4dee-b36c-c340ba99fe3b" in namespace "downward-api-7111" to be "Succeeded or Failed"
May 25 10:16:36.418: INFO: Pod "downward-api-ab6e2307-fedd-4dee-b36c-c340ba99fe3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.628918ms
May 25 10:16:38.422: INFO: Pod "downward-api-ab6e2307-fedd-4dee-b36c-c340ba99fe3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007384956s
May 25 10:16:40.427: INFO: Pod "downward-api-ab6e2307-fedd-4dee-b36c-c340ba99fe3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011978433s
May 25 10:16:42.679: INFO: Pod "downward-api-ab6e2307-fedd-4dee-b36c-c340ba99fe3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.264194724s
STEP: Saw pod success
May 25 10:16:42.679: INFO: Pod "downward-api-ab6e2307-fedd-4dee-b36c-c340ba99fe3b" satisfied condition "Succeeded or Failed"
May 25 10:16:42.683: INFO: Trying to get logs from node v1.21-worker pod downward-api-ab6e2307-fedd-4dee-b36c-c340ba99fe3b container dapi-container:
STEP: delete the pod
May 25 10:16:43.285: INFO: Waiting for pod downward-api-ab6e2307-fedd-4dee-b36c-c340ba99fe3b to disappear
May 25 10:16:43.289: INFO: Pod downward-api-ab6e2307-fedd-4dee-b36c-c340ba99fe3b no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:16:43.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7111" for this suite.
• [SLOW TEST:6.920 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":544,"failed":0}
SS
------------------------------
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:16:43.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run the lifecycle of PodTemplates [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:16:43.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-758" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":42,"skipped":546,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:16:27.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service in namespace services-975
STEP: creating service affinity-clusterip in namespace services-975
STEP: creating replication controller affinity-clusterip in namespace services-975
I0525 10:16:27.840283 28 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-975, replica count: 3
I0525 10:16:30.891448 28 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0525 10:16:33.893007 28 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 25 10:16:33.899: INFO: Creating new exec pod
May 25 10:16:36.913: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-975 exec execpod-affinityjkvfc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
May 25 10:16:37.130: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n"
May 25 10:16:37.130: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
May 25 10:16:37.130: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-975 exec execpod-affinityjkvfc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.96.14.203 80'
May 25 10:16:37.381: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.96.14.203 80\nConnection to 10.96.14.203 80 port [tcp/http] succeeded!\n"
May 25 10:16:37.381: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
May 25 10:16:37.382: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=services-975 exec execpod-affinityjkvfc -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.14.203:80/ ; done'
May 25 10:16:37.716: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.14.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.14.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.14.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.14.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.14.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.14.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.14.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.14.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.14.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.14.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.14.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.14.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.14.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.14.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.14.203:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.14.203:80/\n"
May 25 10:16:37.716: INFO: stdout: "\naffinity-clusterip-4wxrn\naffinity-clusterip-4wxrn\naffinity-clusterip-4wxrn\naffinity-clusterip-4wxrn\naffinity-clusterip-4wxrn\naffinity-clusterip-4wxrn\naffinity-clusterip-4wxrn\naffinity-clusterip-4wxrn\naffinity-clusterip-4wxrn\naffinity-clusterip-4wxrn\naffinity-clusterip-4wxrn\naffinity-clusterip-4wxrn\naffinity-clusterip-4wxrn\naffinity-clusterip-4wxrn\naffinity-clusterip-4wxrn\naffinity-clusterip-4wxrn"
May 25 10:16:37.716: INFO: Received response from host: affinity-clusterip-4wxrn
May 25 10:16:37.716: INFO: Received response from host: affinity-clusterip-4wxrn
May 25 10:16:37.716: INFO: Received response from host: affinity-clusterip-4wxrn
May 25 10:16:37.716: INFO: Received response from host: affinity-clusterip-4wxrn
May 25 10:16:37.716: INFO: Received response from host: affinity-clusterip-4wxrn
May 25 10:16:37.716: INFO: Received response from host: affinity-clusterip-4wxrn
May 25 10:16:37.716: INFO: Received response from host: affinity-clusterip-4wxrn
May 25 10:16:37.716: INFO: Received response from host: affinity-clusterip-4wxrn
May 25 10:16:37.716: INFO: Received response from host: affinity-clusterip-4wxrn
May 25 10:16:37.716: INFO: Received response from host: affinity-clusterip-4wxrn
May 25 10:16:37.716: INFO: Received response from host: affinity-clusterip-4wxrn
May 25 10:16:37.716: INFO: Received response from host: affinity-clusterip-4wxrn
May 25 10:16:37.716: INFO: Received response from host: affinity-clusterip-4wxrn
May 25 10:16:37.716: INFO: Received response from host: affinity-clusterip-4wxrn
May 25 10:16:37.716: INFO: Received response from host: affinity-clusterip-4wxrn
May 25 10:16:37.716: INFO: Received response from host: affinity-clusterip-4wxrn
May 25 10:16:37.716: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip in namespace services-975, will wait for the garbage collector to delete the pods
May 25 10:16:37.782: INFO: Deleting ReplicationController affinity-clusterip took: 4.933395ms
May 25 10:16:37.882: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.176006ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:16:45.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-975" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
• [SLOW TEST:17.704 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":42,"skipped":768,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:16:45.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should delete a collection of events [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create set of events
STEP: get a list of Events with a label in the current namespace
STEP: delete a list of events
May 25 10:16:45.554: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
[AfterEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:16:45.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7633" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":43,"skipped":774,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:16:42.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
May 25 10:16:43.313: INFO: Waiting up to 5m0s for pod "downwardapi-volume-916ec6af-e5b0-4fb4-b47b-b8ca456c9a42" in namespace "projected-6487" to be "Succeeded or Failed"
May 25 10:16:43.316: INFO: Pod "downwardapi-volume-916ec6af-e5b0-4fb4-b47b-b8ca456c9a42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.917937ms
May 25 10:16:45.321: INFO: Pod "downwardapi-volume-916ec6af-e5b0-4fb4-b47b-b8ca456c9a42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00793053s
May 25 10:16:47.326: INFO: Pod "downwardapi-volume-916ec6af-e5b0-4fb4-b47b-b8ca456c9a42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013130839s
STEP: Saw pod success
May 25 10:16:47.326: INFO: Pod "downwardapi-volume-916ec6af-e5b0-4fb4-b47b-b8ca456c9a42" satisfied condition "Succeeded or Failed"
May 25 10:16:47.330: INFO: Trying to get logs from node v1.21-worker pod downwardapi-volume-916ec6af-e5b0-4fb4-b47b-b8ca456c9a42 container client-container:
STEP: delete the pod
May 25 10:16:47.343: INFO: Waiting for pod downwardapi-volume-916ec6af-e5b0-4fb4-b47b-b8ca456c9a42 to disappear
May 25 10:16:47.345: INFO: Pod downwardapi-volume-916ec6af-e5b0-4fb4-b47b-b8ca456c9a42 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:16:47.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6487" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":419,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:16:36.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 25 10:16:37.499: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 25 10:16:39.510: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534597, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534597, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534597, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534597, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 10:16:41.577: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534597, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534597, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534597, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534597, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 25 10:16:44.785: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 25 10:16:44.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8870-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:16:47.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9527" for this suite.
STEP: Destroying namespace "webhook-9527-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:11.512 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":39,"skipped":587,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:16:45.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 25 10:16:45.607: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:16:51.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-57" for this suite.
• [SLOW TEST:6.270 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":44,"skipped":776,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:16:48.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-c08cd8e4-395e-41ef-85ce-52b4ef7d4129
STEP: Creating a pod to test consume secrets
May 25 10:16:48.103: INFO: Waiting up to 5m0s for pod "pod-secrets-2ece9fa8-bc2a-42cd-b5ab-9162cb1b9df9" in namespace "secrets-7979" to be "Succeeded or Failed"
May 25 10:16:48.106: INFO: Pod "pod-secrets-2ece9fa8-bc2a-42cd-b5ab-9162cb1b9df9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128372ms
May 25 10:16:50.111: INFO: Pod "pod-secrets-2ece9fa8-bc2a-42cd-b5ab-9162cb1b9df9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008047845s
May 25 10:16:52.115: INFO: Pod "pod-secrets-2ece9fa8-bc2a-42cd-b5ab-9162cb1b9df9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012009324s
STEP: Saw pod success
May 25 10:16:52.115: INFO: Pod "pod-secrets-2ece9fa8-bc2a-42cd-b5ab-9162cb1b9df9" satisfied condition "Succeeded or Failed"
May 25 10:16:52.118: INFO: Trying to get logs from node v1.21-worker pod pod-secrets-2ece9fa8-bc2a-42cd-b5ab-9162cb1b9df9 container secret-volume-test:
STEP: delete the pod
May 25 10:16:52.131: INFO: Waiting for pod pod-secrets-2ece9fa8-bc2a-42cd-b5ab-9162cb1b9df9 to disappear
May 25 10:16:52.134: INFO: Pod pod-secrets-2ece9fa8-bc2a-42cd-b5ab-9162cb1b9df9 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:16:52.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7979" for this suite.
STEP: Destroying namespace "secret-namespace-1980" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":595,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:16:24.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:16:52.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4840" for this suite.
• [SLOW TEST:28.070 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":19,"skipped":443,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:16:43.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
May 25 10:16:44.499: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
May 25 10:16:46.511: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534604, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534604, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534604, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534604, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 25 10:16:49.522: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 25 10:16:49.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:16:52.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-1756" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:9.383 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":43,"skipped":560,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:15:12.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0525 10:15:52.622598 23 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 25 10:16:54.642: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
May 25 10:16:54.642: INFO: Deleting pod "simpletest.rc-2dqsx" in namespace "gc-8056"
May 25 10:16:54.649: INFO: Deleting pod "simpletest.rc-4fwj7" in namespace "gc-8056"
May 25 10:16:54.656: INFO: Deleting pod "simpletest.rc-75trj" in namespace "gc-8056"
May 25 10:16:54.663: INFO: Deleting pod "simpletest.rc-8l2qv" in namespace "gc-8056"
May 25 10:16:54.669: INFO: Deleting pod "simpletest.rc-c6r9s" in namespace "gc-8056"
May 25 10:16:54.676: INFO: Deleting pod "simpletest.rc-d8npr" in namespace "gc-8056"
May 25 10:16:54.683: INFO: Deleting pod "simpletest.rc-hkpkb" in namespace "gc-8056"
May 25 10:16:54.688: INFO: Deleting pod "simpletest.rc-q8kxt" in namespace "gc-8056"
May 25 10:16:54.693: INFO: Deleting pod "simpletest.rc-r5stf" in namespace "gc-8056"
May 25 10:16:54.698: INFO: Deleting pod "simpletest.rc-z7wpp" in namespace "gc-8056"
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:16:54.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8056" for this suite.
• [SLOW TEST:102.161 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":34,"skipped":617,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:52.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs May 25 10:16:52.841: INFO: Waiting up to 5m0s for pod "pod-b36a53d9-dcf3-4e59-9e39-bfe936575f91" in namespace "emptydir-9934" to be "Succeeded or Failed" May 25 10:16:52.844: INFO: Pod "pod-b36a53d9-dcf3-4e59-9e39-bfe936575f91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.390451ms May 25 10:16:54.847: INFO: Pod "pod-b36a53d9-dcf3-4e59-9e39-bfe936575f91": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.005707045s STEP: Saw pod success May 25 10:16:54.847: INFO: Pod "pod-b36a53d9-dcf3-4e59-9e39-bfe936575f91" satisfied condition "Succeeded or Failed" May 25 10:16:54.850: INFO: Trying to get logs from node v1.21-worker2 pod pod-b36a53d9-dcf3-4e59-9e39-bfe936575f91 container test-container: STEP: delete the pod May 25 10:16:54.864: INFO: Waiting for pod pod-b36a53d9-dcf3-4e59-9e39-bfe936575f91 to disappear May 25 10:16:54.867: INFO: Pod pod-b36a53d9-dcf3-4e59-9e39-bfe936575f91 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:16:54.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9934" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":581,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:47.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication May 25 10:16:47.911: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 10:16:47.926: INFO: new replicaset for deployment 
"sample-webhook-deployment" is yet to be created May 25 10:16:49.938: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534607, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534607, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534607, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534607, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 10:16:52.950: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 25 10:16:54.982: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=webhook-1108 attach --namespace=webhook-1108 to-be-attached-pod -i -c=container1' May 25 10:16:55.137: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:16:55.143: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1108" for this suite. STEP: Destroying namespace "webhook-1108-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.802 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":25,"skipped":427,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:52.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod May 25 10:16:52.226: INFO: The status of Pod annotationupdate3c995869-37da-4a24-947b-0c2c9a21a05c is Pending, waiting for it to be Running (with Ready = true) May 25 10:16:54.230: INFO: The status of Pod 
annotationupdate3c995869-37da-4a24-947b-0c2c9a21a05c is Pending, waiting for it to be Running (with Ready = true) May 25 10:16:56.231: INFO: The status of Pod annotationupdate3c995869-37da-4a24-947b-0c2c9a21a05c is Running (Ready = true) May 25 10:16:56.754: INFO: Successfully updated pod "annotationupdate3c995869-37da-4a24-947b-0c2c9a21a05c" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:16:58.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7079" for this suite. • [SLOW TEST:6.597 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":446,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:52.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the 
webhook pod STEP: Wait for the deployment to be ready May 25 10:16:52.681: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created May 25 10:16:54.690: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534612, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534612, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534612, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534612, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 10:16:56.695: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534612, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534612, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534612, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534612, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 10:16:59.705: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:17:00.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8525" for this suite. STEP: Destroying namespace "webhook-8525-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.550 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":41,"skipped":671,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 
10:17:00.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:17:00.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5285" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":42,"skipped":704,"failed":0} SS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:51.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:16:51.879: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 25 10:16:51.884: INFO: Pod name sample-pod: Found 0 pods out of 1 May 25 10:16:56.890: INFO: Pod 
name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 25 10:16:56.890: INFO: Creating deployment "test-rolling-update-deployment" May 25 10:16:56.895: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 25 10:16:56.900: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 25 10:16:58.908: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 25 10:16:58.911: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534616, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534616, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534616, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534616, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 10:17:00.914: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 May 25 10:17:00.922: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-4330 d7d77871-0d1b-4c2e-962a-3d050fdd8442 506168 1 2021-05-25 10:16:56 +0000 
UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2021-05-25 10:16:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-05-25 10:17:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} 
false false false}] [] Always 0xc0012bb738 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-05-25 10:16:56 +0000 UTC,LastTransitionTime:2021-05-25 10:16:56 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-585b757574" has successfully progressed.,LastUpdateTime:2021-05-25 10:17:00 +0000 UTC,LastTransitionTime:2021-05-25 10:16:56 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 25 10:17:00.926: INFO: New ReplicaSet "test-rolling-update-deployment-585b757574" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-585b757574 deployment-4330 3454653a-6e8f-4d0c-bf67-dc355ec0cbbf 506155 1 2021-05-25 10:16:56 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment d7d77871-0d1b-4c2e-962a-3d050fdd8442 0xc0012bbbf7 0xc0012bbbf8}] [] [{kube-controller-manager Update apps/v1 2021-05-25 10:17:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7d77871-0d1b-4c2e-962a-3d050fdd8442\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 585b757574,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0012bbc88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] 
nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 25 10:17:00.927: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 25 10:17:00.927: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-4330 1e5ef49b-2a1a-4681-a0a3-6f09976080a6 506167 2 2021-05-25 10:16:51 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment d7d77871-0d1b-4c2e-962a-3d050fdd8442 0xc0012bbae7 0xc0012bbae8}] [] [{e2e.test Update apps/v1 2021-05-25 10:16:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-05-25 10:17:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7d77871-0d1b-4c2e-962a-3d050fdd8442\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0012bbb88 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 25 10:17:00.931: INFO: Pod "test-rolling-update-deployment-585b757574-qnw4z" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-585b757574-qnw4z test-rolling-update-deployment-585b757574- deployment-4330 a20631a7-8c42-49f6-a68e-4cb3432ecf3b 506151 0 2021-05-25 10:16:56 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.195" ], "mac": "6e:4f:53:d7:fe:43", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.195" ], "mac": "6e:4f:53:d7:fe:43", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet test-rolling-update-deployment-585b757574 3454653a-6e8f-4d0c-bf67-dc355ec0cbbf 0xc00430ed07 0xc00430ed08}] [] [{kube-controller-manager Update v1 2021-05-25 10:16:56 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3454653a-6e8f-4d0c-bf67-dc355ec0cbbf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-25 10:16:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-25 10:17:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.195\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4glc5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&S
erviceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4glc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:v1.21-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil
,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:16:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:16:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:16:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-25 10:16:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.1.195,StartTime:2021-05-25 10:16:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-25 10:16:58 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:containerd://9ec815218fced34fe6947ba88a9aa44ffce096efc90968ee9507b4ed6310a431,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.195,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:17:00.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4330" for this suite. • [SLOW TEST:9.083 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":45,"skipped":778,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:54.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 25 10:16:54.921: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ae8f7532-253f-44df-b089-4fc525565e58" in namespace "downward-api-9931" to be "Succeeded or Failed" May 25 10:16:54.923: INFO: Pod "downwardapi-volume-ae8f7532-253f-44df-b089-4fc525565e58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.436995ms May 25 10:16:56.927: INFO: Pod "downwardapi-volume-ae8f7532-253f-44df-b089-4fc525565e58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006179143s May 25 10:16:58.931: INFO: Pod "downwardapi-volume-ae8f7532-253f-44df-b089-4fc525565e58": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010622456s May 25 10:17:00.934: INFO: Pod "downwardapi-volume-ae8f7532-253f-44df-b089-4fc525565e58": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.013843816s STEP: Saw pod success May 25 10:17:00.935: INFO: Pod "downwardapi-volume-ae8f7532-253f-44df-b089-4fc525565e58" satisfied condition "Succeeded or Failed" May 25 10:17:00.937: INFO: Trying to get logs from node v1.21-worker pod downwardapi-volume-ae8f7532-253f-44df-b089-4fc525565e58 container client-container: STEP: delete the pod May 25 10:17:00.951: INFO: Waiting for pod downwardapi-volume-ae8f7532-253f-44df-b089-4fc525565e58 to disappear May 25 10:17:00.953: INFO: Pod downwardapi-volume-ae8f7532-253f-44df-b089-4fc525565e58 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:17:00.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9931" for this suite. • [SLOW TEST:6.080 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":587,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:55.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 10:16:55.672: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 10:16:58.689: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 25 10:16:58.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6229-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:17:01.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4405" for this suite. STEP: Destroying namespace "webhook-4405-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.748 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":26,"skipped":453,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:17:00.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override arguments May 25 10:17:01.001: INFO: Waiting up to 5m0s for pod "client-containers-133bd2e8-109f-4197-b54f-f4c230ff809d" in namespace "containers-8512" to be "Succeeded or Failed" May 25 10:17:01.004: INFO: Pod "client-containers-133bd2e8-109f-4197-b54f-f4c230ff809d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.037227ms May 25 10:17:03.009: INFO: Pod "client-containers-133bd2e8-109f-4197-b54f-f4c230ff809d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008520999s STEP: Saw pod success May 25 10:17:03.009: INFO: Pod "client-containers-133bd2e8-109f-4197-b54f-f4c230ff809d" satisfied condition "Succeeded or Failed" May 25 10:17:03.013: INFO: Trying to get logs from node v1.21-worker2 pod client-containers-133bd2e8-109f-4197-b54f-f4c230ff809d container agnhost-container: STEP: delete the pod May 25 10:17:03.026: INFO: Waiting for pod client-containers-133bd2e8-109f-4197-b54f-f4c230ff809d to disappear May 25 10:17:03.028: INFO: Pod client-containers-133bd2e8-109f-4197-b54f-f4c230ff809d no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:17:03.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8512" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":722,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:17:03.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:17:03.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7918" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":724,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:17:03.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create deployment with httpd image May 25 10:17:03.304: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-2116 create -f -' May 25 10:17:03.686: INFO: stderr: "" May 25 10:17:03.686: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image May 25 10:17:03.686: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-2116 diff -f -' May 25 
10:17:04.000: INFO: rc: 1 May 25 10:17:04.000: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-2116 delete -f -' May 25 10:17:04.182: INFO: stderr: "" May 25 10:17:04.182: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:17:04.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2116" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":45,"skipped":754,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:17:04.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name secret-emptykey-test-2324f5e5-ca87-459b-ad06-cc7bff58bd23 [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:17:04.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6026" for this suite. 
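The Secrets test just logged expects the API server to reject a Secret whose data map contains an empty key. The rule can be sketched locally; the regex below restates the documented key format for Secret/ConfigMap data keys, and the helper name is ours, not the real API validator:

```python
import re

# Data keys must be non-empty and consist of alphanumerics, '-', '_' or '.'
# (illustrative re-statement of the API's key-format rule).
KEY_RE = re.compile(r'^[-._a-zA-Z0-9]+$')

def validate_secret_keys(data: dict) -> list:
    """Return error strings for invalid keys (hypothetical helper)."""
    errors = []
    for key in data:
        if not KEY_RE.match(key):
            errors.append(f"invalid key: {key!r}")
    return errors

print(validate_secret_keys({"": b"value"}))    # the empty key is rejected
print(validate_secret_keys({"ca.crt": b"x"}))  # a well-formed key passes
```

This mirrors why the test's pod-creation attempt with `secret-emptykey-test-...` fails at admission rather than at mount time.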
• ------------------------------ {"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":46,"skipped":759,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:54.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC May 25 10:16:54.766: INFO: namespace kubectl-8022 May 25 10:16:54.767: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8022 create -f -' May 25 10:16:55.175: INFO: stderr: "" May 25 10:16:55.175: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. May 25 10:16:56.181: INFO: Selector matched 1 pods for map[app:agnhost] May 25 10:16:56.181: INFO: Found 0 / 1 May 25 10:16:57.179: INFO: Selector matched 1 pods for map[app:agnhost] May 25 10:16:57.179: INFO: Found 0 / 1 May 25 10:16:58.179: INFO: Selector matched 1 pods for map[app:agnhost] May 25 10:16:58.179: INFO: Found 0 / 1 May 25 10:16:59.180: INFO: Selector matched 1 pods for map[app:agnhost] May 25 10:16:59.180: INFO: Found 0 / 1 May 25 10:17:00.178: INFO: Selector matched 1 pods for map[app:agnhost] May 25 10:17:00.178: INFO: Found 1 / 1 May 25 10:17:00.178: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 May 25 10:17:00.181: INFO: Selector matched 1 pods for map[app:agnhost] May 25 10:17:00.181: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 25 10:17:00.181: INFO: wait on agnhost-primary startup in kubectl-8022 May 25 10:17:00.181: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8022 logs agnhost-primary-2kd9b agnhost-primary' May 25 10:17:00.308: INFO: stderr: "" May 25 10:17:00.308: INFO: stdout: "Paused\n" STEP: exposing RC May 25 10:17:00.308: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8022 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' May 25 10:17:00.469: INFO: stderr: "" May 25 10:17:00.469: INFO: stdout: "service/rm2 exposed\n" May 25 10:17:00.472: INFO: Service rm2 in namespace kubectl-8022 found. STEP: exposing service May 25 10:17:02.483: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8022 expose service rm2 --name=rm3 --port=2345 --target-port=6379' May 25 10:17:02.611: INFO: stderr: "" May 25 10:17:02.611: INFO: stdout: "service/rm3 exposed\n" May 25 10:17:02.614: INFO: Service rm3 in namespace kubectl-8022 found. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:17:04.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8022" for this suite. 
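The `kubectl expose` invocations above (`--port=1234 --target-port=6379`, then re-exposing `rm2` as `rm3`) each build a Service whose ports map a cluster-facing port onto the backend container port. A sketch of that wiring as plain data (simplified dict shapes, not real client types):

```python
def expose(name, port, target_port, selector):
    """Sketch of the Service object `kubectl expose` constructs (shape simplified)."""
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {
            "selector": dict(selector),
            # Traffic to <service>:port is forwarded to targetPort on matching pods.
            "ports": [{"port": port, "targetPort": target_port}],
        },
    }

rm2 = expose("rm2", 1234, 6379, {"app": "agnhost"})
# Exposing an existing service reuses its selector, so rm3 fronts the same pods.
rm3 = expose("rm3", 2345, 6379, {"app": "agnhost"})
print(rm2["spec"]["ports"][0])  # {'port': 1234, 'targetPort': 6379}
```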
• [SLOW TEST:9.891 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1223 should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":35,"skipped":631,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:16:58.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication May 25 10:16:59.379: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 10:16:59.393: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created May 25 10:17:01.408: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534619, 
loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534619, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534619, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534619, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 10:17:03.413: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534619, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534619, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534619, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534619, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 10:17:06.418: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created 
validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:17:06.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7097" for this suite. STEP: Destroying namespace "webhook-7097-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.788 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":21,"skipped":454,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:17:01.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium May 25 10:17:01.047: INFO: Waiting up to 5m0s for pod "pod-45033853-d106-4eb8-9d14-66ffc6a8cecc" in namespace "emptydir-2471" to be "Succeeded or Failed" May 25 10:17:01.050: INFO: Pod "pod-45033853-d106-4eb8-9d14-66ffc6a8cecc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.616786ms May 25 10:17:03.054: INFO: Pod "pod-45033853-d106-4eb8-9d14-66ffc6a8cecc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006630708s May 25 10:17:05.059: INFO: Pod "pod-45033853-d106-4eb8-9d14-66ffc6a8cecc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011322721s May 25 10:17:07.063: INFO: Pod "pod-45033853-d106-4eb8-9d14-66ffc6a8cecc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015606617s STEP: Saw pod success May 25 10:17:07.063: INFO: Pod "pod-45033853-d106-4eb8-9d14-66ffc6a8cecc" satisfied condition "Succeeded or Failed" May 25 10:17:07.066: INFO: Trying to get logs from node v1.21-worker pod pod-45033853-d106-4eb8-9d14-66ffc6a8cecc container test-container: STEP: delete the pod May 25 10:17:07.079: INFO: Waiting for pod pod-45033853-d106-4eb8-9d14-66ffc6a8cecc to disappear May 25 10:17:07.081: INFO: Pod pod-45033853-d106-4eb8-9d14-66ffc6a8cecc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:17:07.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2471" for this suite. 
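The EmptyDir test above names its expected file mode in octal: 0666 is rw-rw-rw-, i.e. read and write for owner, group and other, with no execute bits. The mode arithmetic can be verified without a cluster:

```python
import os
import stat
import tempfile

# Create a scratch file and set mode 0o666 explicitly; unlike the process
# umask (which only masks newly created files), chmod sets the bits exactly.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o666)
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o666 on a POSIX filesystem
os.remove(path)
```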
• [SLOW TEST:6.071 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":830,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:17:01.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Creating a NodePort Service STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota STEP: Ensuring resource quota status captures service creation STEP: Deleting Services STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:17:13.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6718" for this suite. 
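The ResourceQuota test above counts Services (and NodePort-consuming Services) against hard limits, rejects a creation that would exceed remaining quota, and expects usage to be released on deletion. The bookkeeping can be sketched as a toy model (the real controller reconciles usage from the API server; the class below is ours):

```python
class QuotaExceeded(Exception):
    pass

class ToyResourceQuota:
    """Toy quota: track 'used' against 'hard' per resource name."""
    def __init__(self, hard):
        self.hard = dict(hard)
        self.used = {k: 0 for k in hard}

    def charge(self, resource, n=1):
        if self.used[resource] + n > self.hard[resource]:
            raise QuotaExceeded(
                f"{resource}: {self.used[resource]}+{n} > {self.hard[resource]}")
        self.used[resource] += n

    def release(self, resource, n=1):
        self.used[resource] = max(0, self.used[resource] - n)

q = ToyResourceQuota({"services": 3, "services.nodeports": 1})
q.charge("services")                 # plain Service
q.charge("services")                 # NodePort Service counts as a service too
q.charge("services.nodeports")       # ...and consumes the nodeport quota
try:
    q.charge("services.nodeports")   # a LoadBalancer needing a NodePort exceeds quota
except QuotaExceeded as e:
    print("denied:", e)
q.release("services", 2)             # deleting Services releases usage
```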
• [SLOW TEST:11.215 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":27,"skipped":458,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 10:17:06.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 25 10:17:13.794: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:17:13.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6016" for this suite. 
• [SLOW TEST:7.229 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":456,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:17:04.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 25 10:17:05.449: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 25 10:17:07.460: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534625, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534625, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534625, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534625, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 10:17:09.479: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534625, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534625, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534625, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534625, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 10:17:11.465: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534625, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534625, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534625, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534625, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 25 10:17:14.473: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:17:14.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-457" for this suite.
STEP: Destroying namespace "webhook-457-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.926 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":36,"skipped":655,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:17:13.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
May 25 10:17:13.881: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3581c93d-5e40-4a49-9516-e0ee43807748" in namespace "projected-7053" to be "Succeeded or Failed"
May 25 10:17:13.883: INFO: Pod "downwardapi-volume-3581c93d-5e40-4a49-9516-e0ee43807748": Phase="Pending", Reason="", readiness=false. Elapsed: 2.879299ms
May 25 10:17:15.888: INFO: Pod "downwardapi-volume-3581c93d-5e40-4a49-9516-e0ee43807748": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006977225s
STEP: Saw pod success
May 25 10:17:15.888: INFO: Pod "downwardapi-volume-3581c93d-5e40-4a49-9516-e0ee43807748" satisfied condition "Succeeded or Failed"
May 25 10:17:15.890: INFO: Trying to get logs from node v1.21-worker2 pod downwardapi-volume-3581c93d-5e40-4a49-9516-e0ee43807748 container client-container:
STEP: delete the pod
May 25 10:17:15.903: INFO: Waiting for pod downwardapi-volume-3581c93d-5e40-4a49-9516-e0ee43807748 to disappear
May 25 10:17:15.905: INFO: Pod downwardapi-volume-3581c93d-5e40-4a49-9516-e0ee43807748 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:17:15.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7053" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":468,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:17:04.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 25 10:17:04.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 25 10:17:08.871: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-370 --namespace=crd-publish-openapi-370 create -f -'
May 25 10:17:10.517: INFO: stderr: ""
May 25 10:17:10.517: INFO: stdout: "e2e-test-crd-publish-openapi-944-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May 25 10:17:10.517: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-370 --namespace=crd-publish-openapi-370 delete e2e-test-crd-publish-openapi-944-crds test-cr'
May 25 10:17:10.784: INFO: stderr: ""
May 25 10:17:10.784: INFO: stdout: "e2e-test-crd-publish-openapi-944-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
May 25 10:17:10.784: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-370 --namespace=crd-publish-openapi-370 apply -f -'
May 25 10:17:11.133: INFO: stderr: ""
May 25 10:17:11.133: INFO: stdout: "e2e-test-crd-publish-openapi-944-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May 25 10:17:11.133: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-370 --namespace=crd-publish-openapi-370 delete e2e-test-crd-publish-openapi-944-crds test-cr'
May 25 10:17:11.251: INFO: stderr: ""
May 25 10:17:11.251: INFO: stdout: "e2e-test-crd-publish-openapi-944-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
May 25 10:17:11.251: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-370 explain e2e-test-crd-publish-openapi-944-crds'
May 25 10:17:11.546: INFO: stderr: ""
May 25 10:17:11.546: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-944-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:17:16.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-370" for this suite.
• [SLOW TEST:12.673 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":47,"skipped":777,"failed":0}
SS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:17:15.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 25 10:17:16.021: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
May 25 10:17:18.049: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:17:19.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-570" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":24,"skipped":516,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:17:07.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 25 10:17:07.886: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
May 25 10:17:10.085: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534627, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534627, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534627, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534627, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 10:17:12.090: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534627, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534627, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534627, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534627, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 10:17:14.090: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534627, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534627, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534627, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534627, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 10:17:16.089: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534627, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534627, loc:(*time.Location)(0x9dc0820)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534627, loc:(*time.Location)(0x9dc0820)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534627, loc:(*time.Location)(0x9dc0820)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 25 10:17:19.097: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
May 25 10:17:19.115: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:17:19.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8579" for this suite.
STEP: Destroying namespace "webhook-8579-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:12.051 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":47,"skipped":848,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:17:19.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should check is all data is printed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 25 10:17:19.264: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-7671 version'
May 25 10:17:19.367: INFO: stderr: ""
May 25 10:17:19.367: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.1\", GitCommit:\"5e58841cce77d4bc13713ad2b91fa0d961e69192\", GitTreeState:\"clean\", BuildDate:\"2021-05-12T14:18:45Z\", GoVersion:\"go1.16.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.1\", GitCommit:\"5e58841cce77d4bc13713ad2b91fa0d961e69192\", GitTreeState:\"clean\", BuildDate:\"2021-05-18T01:10:20Z\", GoVersion:\"go1.16.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:17:19.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7671" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":48,"skipped":888,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:17:16.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 25 10:17:16.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
May 25 10:17:17.004: INFO: The status of Pod pod-exec-websocket-757bb1ab-eedf-4655-a8e9-a4db52e9b8bd is Pending, waiting for it to be Running (with Ready = true)
May 25 10:17:19.007: INFO: The status of Pod pod-exec-websocket-757bb1ab-eedf-4655-a8e9-a4db52e9b8bd is Pending, waiting for it to be Running (with Ready = true)
May 25 10:17:21.008: INFO: The status of Pod pod-exec-websocket-757bb1ab-eedf-4655-a8e9-a4db52e9b8bd is Pending, waiting for it to be Running (with Ready = true)
May 25 10:17:23.008: INFO: The status of Pod pod-exec-websocket-757bb1ab-eedf-4655-a8e9-a4db52e9b8bd is Running (Ready = true)
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:17:23.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3631" for this suite.
• [SLOW TEST:6.170 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":779,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:17:19.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] Replicaset should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota
May 25 10:17:19.126: INFO: Pod name sample-pod: Found 0 pods out of 1
May 25 10:17:24.129: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the replicaset Spec.Replicas was modified
STEP: Patch a scale subresource
[AfterEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:17:24.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-220" for this suite.
• [SLOW TEST:5.062 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Replicaset should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":25,"skipped":528,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:17:19.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:17:26.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-170" for this suite.
• [SLOW TEST:7.442 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":49,"skipped":924,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:15:50.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105
STEP: Creating service test in namespace statefulset-2410
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a new StatefulSet
May 25 10:15:50.311: INFO: Found 0 stateful pods, waiting for 3
May 25 10:16:00.316: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 25 10:16:00.316: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 25 10:16:00.316: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1
May 25 10:16:00.344: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
May 25 10:16:10.383: INFO: Updating stateful set ss2
May 25 10:16:10.389: INFO: Waiting for Pod statefulset-2410/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
STEP: Restoring Pods to the correct revision when they are deleted
May 25 10:16:20.420: INFO: Found 1 stateful pods, waiting for 3
May 25 10:16:30.425: INFO: Found 2 stateful pods, waiting for 3
May 25 10:16:40.424: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 25 10:16:40.424: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 25 10:16:40.424: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
May 25 10:16:50.425: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 25 10:16:50.425: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 25 10:16:50.425: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
May 25 10:16:50.451: INFO: Updating stateful set ss2
May 25 10:16:50.457: INFO: Waiting for Pod statefulset-2410/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
May 25 10:17:00.483: INFO: Updating stateful set ss2
May 25 10:17:00.489: INFO: Waiting for StatefulSet statefulset-2410/ss2 to complete update
May 25 10:17:00.489: INFO: Waiting for Pod statefulset-2410/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
May 25 10:17:10.499: INFO: Deleting all statefulset in ns statefulset-2410
May 25 10:17:10.505: INFO: Scaling statefulset ss2 to 0
May 25 10:17:30.521: INFO: Waiting for statefulset status.replicas updated to 0
May 25 10:17:30.524: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:17:30.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2410" for this suite.
• [SLOW TEST:100.276 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":28,"skipped":550,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:17:27.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
May 25 10:17:27.392: INFO: Waiting up to 5m0s for pod "downward-api-12e1d691-e2da-4ef3-88fa-36a178eb072d" in namespace "downward-api-2231" to be "Succeeded or Failed"
May 25 10:17:27.394: INFO: Pod "downward-api-12e1d691-e2da-4ef3-88fa-36a178eb072d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317816ms
May 25 10:17:29.399: INFO: Pod "downward-api-12e1d691-e2da-4ef3-88fa-36a178eb072d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006384895s
May 25 10:17:31.405: INFO: Pod "downward-api-12e1d691-e2da-4ef3-88fa-36a178eb072d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01316809s
STEP: Saw pod success
May 25 10:17:31.405: INFO: Pod "downward-api-12e1d691-e2da-4ef3-88fa-36a178eb072d" satisfied condition "Succeeded or Failed"
May 25 10:17:31.485: INFO: Trying to get logs from node v1.21-worker pod downward-api-12e1d691-e2da-4ef3-88fa-36a178eb072d container dapi-container:
STEP: delete the pod
May 25 10:17:32.083: INFO: Waiting for pod downward-api-12e1d691-e2da-4ef3-88fa-36a178eb072d to disappear
May 25 10:17:32.180: INFO: Pod downward-api-12e1d691-e2da-4ef3-88fa-36a178eb072d no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:17:32.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2231" for this suite.
• [SLOW TEST:5.142 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":1013,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Ingress API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:17:32.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingress
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating Ingress API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.iov1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
May 25 10:17:32.296: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
May 25 10:17:32.302: INFO: starting watch
STEP: patching
STEP: updating
May 25 10:17:32.312: INFO: waiting for watch events with expected annotations
May 25 10:17:32.312: INFO: saw patched and updated annotations
STEP: patching /status
STEP: updating /status
STEP: get /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Ingress API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:17:32.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-6739" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":51,"skipped":1048,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:17:01.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: set up a multi version CRD
May 25 10:17:01.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:17:32.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6251" for this suite.
• [SLOW TEST:31.745 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":46,"skipped":616,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:12:38.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63
W0525 10:12:38.200222 29 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
[It] should not schedule jobs when suspended [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a suspended cronjob
STEP: Ensuring no jobs are scheduled
STEP: Ensuring no job exists by listing jobs explicitly
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:17:38.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-9510" for this suite.
• [SLOW TEST:300.066 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not schedule jobs when suspended [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":22,"skipped":380,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:17:23.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:17:39.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3782" for this suite.
• [SLOW TEST:16.381 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":49,"skipped":787,"failed":0}
SSSSSSSS
------------------------------
May 25 10:17:39.552: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:17:38.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-33acf6eb-dd93-4337-9836-4e7087b9ac45
STEP: Creating a pod to test consume configMaps
May 25 10:17:38.281: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-86547b98-92da-4c1a-bee9-a4fdf487a63f" in namespace "projected-9693" to be "Succeeded or Failed"
May 25 10:17:38.284: INFO: Pod "pod-projected-configmaps-86547b98-92da-4c1a-bee9-a4fdf487a63f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.04138ms
May 25 10:17:40.289: INFO: Pod "pod-projected-configmaps-86547b98-92da-4c1a-bee9-a4fdf487a63f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007598994s
STEP: Saw pod success
May 25 10:17:40.289: INFO: Pod "pod-projected-configmaps-86547b98-92da-4c1a-bee9-a4fdf487a63f" satisfied condition "Succeeded or Failed"
May 25 10:17:40.292: INFO: Trying to get logs from node v1.21-worker pod pod-projected-configmaps-86547b98-92da-4c1a-bee9-a4fdf487a63f container agnhost-container:
STEP: delete the pod
May 25 10:17:40.310: INFO: Waiting for pod pod-projected-configmaps-86547b98-92da-4c1a-bee9-a4fdf487a63f to disappear
May 25 10:17:40.314: INFO: Pod pod-projected-configmaps-86547b98-92da-4c1a-bee9-a4fdf487a63f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:17:40.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9693" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":384,"failed":0}
May 25 10:17:40.325: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:17:24.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52
STEP: create the container to handle the HTTPGet hook request.
May 25 10:17:24.187: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
May 25 10:17:26.286: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
May 25 10:17:28.191: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
May 25 10:17:30.192: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the pod with lifecycle hook
May 25 10:17:30.203: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true)
May 25 10:17:32.206: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true)
May 25 10:17:34.207: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true)
STEP: delete the pod with lifecycle hook
May 25 10:17:34.215: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 25 10:17:34.219: INFO: Pod pod-with-prestop-exec-hook still exists
May 25 10:17:36.219: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 25 10:17:36.223: INFO: Pod pod-with-prestop-exec-hook still exists
May 25 10:17:38.220: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 25 10:17:38.223: INFO: Pod pod-with-prestop-exec-hook still exists
May 25 10:17:40.219: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 25 10:17:40.224: INFO: Pod pod-with-prestop-exec-hook still exists
May 25 10:17:42.219: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 25 10:17:42.223: INFO: Pod pod-with-prestop-exec-hook still exists
May 25 10:17:44.219: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 25 10:17:44.224: INFO: Pod pod-with-prestop-exec-hook still exists
May 25 10:17:46.221: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 25 10:17:46.225: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:17:46.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7676" for this suite.
• [SLOW TEST:22.085 seconds]
[sig-node] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":529,"failed":0}
May 25 10:17:46.244: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:17:32.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:17:49.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2436" for this suite.
• [SLOW TEST:17.081 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":52,"skipped":1051,"failed":0}
May 25 10:17:49.456: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:17:30.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-secret-q727
STEP: Creating a pod to test atomic-volume-subpath
May 25 10:17:30.611: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-q727" in namespace "subpath-2737" to be "Succeeded or Failed"
May 25 10:17:30.614: INFO: Pod "pod-subpath-test-secret-q727": Phase="Pending", Reason="", readiness=false. Elapsed: 2.884451ms
May 25 10:17:32.619: INFO: Pod "pod-subpath-test-secret-q727": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007427541s
May 25 10:17:34.623: INFO: Pod "pod-subpath-test-secret-q727": Phase="Running", Reason="", readiness=true. Elapsed: 4.012197494s
May 25 10:17:36.628: INFO: Pod "pod-subpath-test-secret-q727": Phase="Running", Reason="", readiness=true. Elapsed: 6.017134594s
May 25 10:17:38.633: INFO: Pod "pod-subpath-test-secret-q727": Phase="Running", Reason="", readiness=true. Elapsed: 8.021695812s
May 25 10:17:40.638: INFO: Pod "pod-subpath-test-secret-q727": Phase="Running", Reason="", readiness=true. Elapsed: 10.027178796s
May 25 10:17:42.643: INFO: Pod "pod-subpath-test-secret-q727": Phase="Running", Reason="", readiness=true. Elapsed: 12.031289162s
May 25 10:17:44.650: INFO: Pod "pod-subpath-test-secret-q727": Phase="Running", Reason="", readiness=true. Elapsed: 14.03918647s
May 25 10:17:46.655: INFO: Pod "pod-subpath-test-secret-q727": Phase="Running", Reason="", readiness=true. Elapsed: 16.044189921s
May 25 10:17:48.661: INFO: Pod "pod-subpath-test-secret-q727": Phase="Running", Reason="", readiness=true. Elapsed: 18.049536955s
May 25 10:17:50.666: INFO: Pod "pod-subpath-test-secret-q727": Phase="Running", Reason="", readiness=true. Elapsed: 20.054450433s
May 25 10:17:52.671: INFO: Pod "pod-subpath-test-secret-q727": Phase="Running", Reason="", readiness=true. Elapsed: 22.059775363s
May 25 10:17:54.675: INFO: Pod "pod-subpath-test-secret-q727": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.06422016s
STEP: Saw pod success
May 25 10:17:54.676: INFO: Pod "pod-subpath-test-secret-q727" satisfied condition "Succeeded or Failed"
May 25 10:17:54.678: INFO: Trying to get logs from node v1.21-worker2 pod pod-subpath-test-secret-q727 container test-container-subpath-secret-q727:
STEP: delete the pod
May 25 10:17:54.693: INFO: Waiting for pod pod-subpath-test-secret-q727 to disappear
May 25 10:17:54.695: INFO: Pod pod-subpath-test-secret-q727 no longer exists
STEP: Deleting pod pod-subpath-test-secret-q727
May 25 10:17:54.695: INFO: Deleting pod "pod-subpath-test-secret-q727" in namespace "subpath-2737"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:17:54.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2737" for this suite.
• [SLOW TEST:24.142 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":29,"skipped":559,"failed":0}
May 25 10:17:54.709: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":10,"skipped":74,"failed":0}
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:15:14.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod with failed condition
STEP: updating the pod
May 25 10:17:14.676: INFO: Successfully updated pod "var-expansion-c841d7fe-def3-49fb-ad6e-a1eccf3011b8"
STEP: waiting for pod running
STEP: deleting the pod gracefully
May 25 10:17:16.683: INFO: Deleting pod "var-expansion-c841d7fe-def3-49fb-ad6e-a1eccf3011b8" in namespace "var-expansion-9612"
May 25 10:17:16.689: INFO: Wait up to 5m0s for pod "var-expansion-c841d7fe-def3-49fb-ad6e-a1eccf3011b8" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:17:56.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9612" for this suite.
• [SLOW TEST:162.586 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":11,"skipped":74,"failed":0}
May 25 10:17:56.711: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:17:32.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
May 25 10:17:32.813: INFO: PodSpec: initContainers in spec.initContainers
May 25
10:18:16.600: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-dfec7f90-46f8-40d5-94de-11000ef7750c", GenerateName:"", Namespace:"init-container-1166", SelfLink:"", UID:"0fcb2d06-ab54-49c3-87a0-ce5814193541", ResourceVersion:"507892", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63757534652, loc:(*time.Location)(0x9dc0820)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"813202541"}, Annotations:map[string]string{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.1.211\"\n ],\n \"mac\": \"06:dc:97:47:71:3d\",\n \"default\": true,\n \"dns\": {}\n}]", "k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.1.211\"\n ],\n \"mac\": \"06:dc:97:47:71:3d\",\n \"default\": true,\n \"dns\": {}\n}]"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003bf4b28), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003bf4b40)}, v1.ManagedFieldsEntry{Manager:"multus", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003bf4b58), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003bf4b70)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003bf4b88), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003bf4ba0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-57zq4", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), 
Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc001703800), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-57zq4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, 
v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-57zq4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-57zq4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", 
ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0055baf58), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"v1.21-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000ae0460), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0055bafe0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0055bb000)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0055bb008), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0055bb00c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc005565190), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534652, loc:(*time.Location)(0x9dc0820)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, 
ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534652, loc:(*time.Location)(0x9dc0820)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534652, loc:(*time.Location)(0x9dc0820)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757534652, loc:(*time.Location)(0x9dc0820)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.4", PodIP:"10.244.1.211", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.211"}}, StartTime:(*v1.Time)(0xc003bf4bd0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000ae0540)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000ae0620)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"containerd://6b27ded0ada9e1f662251ed43d241247c4666af0d3ff1819effcb564e4bf2d4d", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001703b40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001703a80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0055bb08f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:18:16.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1166" for this suite. 
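The rule this test verifies — init containers run one at a time, in order, and app containers (here `run1`) never start while an init container keeps failing under `restartPolicy: Always` — can be sketched as a toy model. This is purely an illustration of the semantics, not the kubelet's actual code; the `initContainer` type and `appContainersMayStart` helper are invented for the sketch.

```go
package main

import "fmt"

// initContainer models just enough of an init container for this sketch:
// a name and whether a run of it succeeds.
type initContainer struct {
	name string
	ok   bool
}

// appContainersMayStart mirrors the gating rule: init containers execute
// sequentially, and the pod's app containers start only once every init
// container has succeeded. Under restartPolicy: Always, a failing init
// container is retried forever (with backoff), so later init containers
// and all app containers stay Waiting — exactly the status dumped above
// ("containers with incomplete status: [init1 init2]").
func appContainersMayStart(inits []initContainer) bool {
	for _, ic := range inits {
		if !ic.ok {
			return false
		}
	}
	return true
}

func main() {
	// init1 fails, so init2 never runs and run1 never starts.
	fmt.Println(appContainersMayStart([]initContainer{{"init1", false}, {"init2", true}})) // false
	// With all init containers succeeding, app containers may start.
	fmt.Println(appContainersMayStart([]initContainer{{"init1", true}, {"init2", true}})) // true
}
```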
• [SLOW TEST:43.827 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":47,"skipped":631,"failed":0}
May 25 10:18:16.611: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:17:13.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 25 10:17:13.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR
May 25 10:17:15.972: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-05-25T10:17:15Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-05-25T10:17:15Z]] name:name1 resourceVersion:506831 uid:140fa5c1-cad5-45ac-a650-c3e39114e7c9] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
May 25 10:17:26.178: INFO: Got
: ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-05-25T10:17:25Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-05-25T10:17:25Z]] name:name2 resourceVersion:507199 uid:7660107d-5755-4f00-9dee-4d196c4c809c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 25 10:17:36.187: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-05-25T10:17:15Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-05-25T10:17:36Z]] name:name1 resourceVersion:507494 uid:140fa5c1-cad5-45ac-a650-c3e39114e7c9] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 25 10:17:46.197: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-05-25T10:17:25Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-05-25T10:17:46Z]] name:name2 resourceVersion:507647 uid:7660107d-5755-4f00-9dee-4d196c4c809c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 25 10:17:56.207: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-05-25T10:17:15Z generation:2 
managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-05-25T10:17:36Z]] name:name1 resourceVersion:507745 uid:140fa5c1-cad5-45ac-a650-c3e39114e7c9] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 25 10:18:06.381: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-05-25T10:17:25Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-05-25T10:17:46Z]] name:name2 resourceVersion:507813 uid:7660107d-5755-4f00-9dee-4d196c4c809c] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:18:16.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-2985" for this suite. 
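For each custom resource above, the watch delivers ADDED first, then MODIFIED events, then DELETED — the ordering invariant this test asserts. A minimal sketch of that invariant checker (the `event` type and `wellOrdered` function are invented for illustration; they are not part of the e2e framework):

```go
package main

import "fmt"

// event mirrors the shape of the watch notifications logged above:
// an event type (ADDED / MODIFIED / DELETED) and the object's name.
type event struct {
	typ, name string
}

// wellOrdered checks the per-object ordering this test's event sequence
// exhibits: ADDED first, then zero or more MODIFIED, then DELETED.
// (A real watch stream could re-ADD an object after deletion; this sketch
// only models the sequence seen in the log.)
func wellOrdered(stream []event) bool {
	state := map[string]string{} // name -> last event type seen
	for _, e := range stream {
		last := state[e.name]
		switch e.typ {
		case "ADDED":
			if last != "" {
				return false // object already seen
			}
		case "MODIFIED", "DELETED":
			if last != "ADDED" && last != "MODIFIED" {
				return false // no live object to modify or delete
			}
		}
		state[e.name] = e.typ
	}
	return true
}

func main() {
	// The sequence observed above: create both CRs, modify both, delete both.
	stream := []event{
		{"ADDED", "name1"}, {"ADDED", "name2"},
		{"MODIFIED", "name1"}, {"MODIFIED", "name2"},
		{"DELETED", "name1"}, {"DELETED", "name2"},
	}
	fmt.Println(wellOrdered(stream)) // true
	// A MODIFIED with no preceding ADDED violates the invariant.
	fmt.Println(wellOrdered([]event{{"MODIFIED", "name1"}})) // false
}
```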
• [SLOW TEST:63.544 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":28,"skipped":549,"failed":0}
May 25 10:18:16.906: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:16:31.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105
STEP: Creating service test in namespace statefulset-8380
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-8380
STEP: Waiting until all
stateful set ss replicas will be running in namespace statefulset-8380 May 25 10:16:31.306: INFO: Found 0 stateful pods, waiting for 1 May 25 10:16:41.378: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 25 10:16:41.382: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=statefulset-8380 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 25 10:16:41.721: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 25 10:16:41.721: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 25 10:16:41.721: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 25 10:16:41.777: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 25 10:16:51.784: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 25 10:16:51.784: INFO: Waiting for statefulset status.replicas updated to 0 May 25 10:16:51.800: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999559s May 25 10:16:52.805: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.995302067s May 25 10:16:53.810: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.990845594s May 25 10:16:54.814: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.986871426s May 25 10:16:55.818: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.982936085s May 25 10:16:56.821: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.978916008s May 25 10:16:57.826: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.974202746s May 25 10:16:58.830: INFO: Verifying statefulset ss doesn't scale past 1 for another 
2.96994302s May 25 10:16:59.835: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.96556454s May 25 10:17:00.838: INFO: Verifying statefulset ss doesn't scale past 1 for another 961.823755ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8380 May 25 10:17:01.878: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=statefulset-8380 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 25 10:17:02.184: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 25 10:17:02.184: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 25 10:17:02.184: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 25 10:17:02.188: INFO: Found 1 stateful pods, waiting for 3 May 25 10:17:12.192: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 25 10:17:12.192: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 25 10:17:12.192: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false May 25 10:17:22.192: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 25 10:17:22.193: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 25 10:17:22.193: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 25 10:17:22.284: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=statefulset-8380 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || 
true' May 25 10:17:22.547: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 25 10:17:22.547: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 25 10:17:22.547: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 25 10:17:22.547: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=statefulset-8380 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 25 10:17:22.785: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 25 10:17:22.785: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 25 10:17:22.785: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 25 10:17:22.785: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=statefulset-8380 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 25 10:17:23.024: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 25 10:17:23.024: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 25 10:17:23.024: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 25 10:17:23.024: INFO: Waiting for statefulset status.replicas updated to 0 May 25 10:17:23.027: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 25 10:17:33.036: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 25 10:17:33.036: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 25 10:17:33.036: INFO: Waiting for pod ss-2 to 
enter Running - Ready=false, currently Running - Ready=false May 25 10:17:33.048: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999529s May 25 10:17:34.055: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994878637s May 25 10:17:35.060: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989408547s May 25 10:17:36.065: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.984084878s May 25 10:17:37.072: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.978273848s May 25 10:17:38.077: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.972371004s May 25 10:17:39.083: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.966573016s May 25 10:17:40.088: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.961506989s May 25 10:17:41.093: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.95587089s May 25 10:17:42.099: INFO: Verifying statefulset ss doesn't scale past 3 for another 950.078719ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-8380 May 25 10:17:43.105: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=statefulset-8380 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 25 10:17:43.370: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 25 10:17:43.370: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 25 10:17:43.370: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 25 10:17:43.370: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=statefulset-8380 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 25 
10:17:43.616: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 25 10:17:43.616: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 25 10:17:43.616: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 25 10:17:43.616: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=statefulset-8380 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 25 10:17:43.851: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 25 10:17:43.851: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 25 10:17:43.851: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 25 10:17:43.852: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 May 25 10:18:23.878: INFO: Deleting all statefulset in ns statefulset-8380 May 25 10:18:23.881: INFO: Scaling statefulset ss to 0 May 25 10:18:23.989: INFO: Waiting for statefulset status.replicas updated to 0 May 25 10:18:23.992: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:18:24.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8380" for this suite. 
• [SLOW TEST:112.757 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":18,"skipped":201,"failed":0}
May 25 10:18:24.019: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:17:14.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105
STEP: Creating service test in namespace statefulset-3985
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a new StatefulSet
May 25 10:17:14.674: INFO: Found 0 stateful pods, waiting for 3
May 25 10:17:24.679: INFO: Waiting for pod ss2-0 to
enter Running - Ready=true, currently Running - Ready=true May 25 10:17:24.679: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 25 10:17:24.679: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 25 10:17:34.679: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 25 10:17:34.679: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 25 10:17:34.679: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 25 10:17:34.691: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=statefulset-3985 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 25 10:17:34.908: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 25 10:17:34.908: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 25 10:17:34.908: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 May 25 10:17:44.946: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 25 10:17:54.966: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=statefulset-3985 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 25 10:17:55.174: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 25 10:17:55.174: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 25 10:17:55.174: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || 
true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 25 10:18:05.289: INFO: Waiting for StatefulSet statefulset-3985/ss2 to complete update May 25 10:18:05.290: INFO: Waiting for Pod statefulset-3985/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 May 25 10:18:05.290: INFO: Waiting for Pod statefulset-3985/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 May 25 10:18:05.290: INFO: Waiting for Pod statefulset-3985/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 May 25 10:18:15.298: INFO: Waiting for StatefulSet statefulset-3985/ss2 to complete update May 25 10:18:15.298: INFO: Waiting for Pod statefulset-3985/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 May 25 10:18:25.298: INFO: Waiting for StatefulSet statefulset-3985/ss2 to complete update STEP: Rolling back to a previous revision May 25 10:18:35.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=statefulset-3985 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 25 10:18:35.571: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 25 10:18:35.571: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 25 10:18:35.571: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 25 10:18:45.609: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 25 10:18:55.899: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=statefulset-3985 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 25 10:18:56.172: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 25 10:18:56.172: INFO: stdout: "'/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html'\n" May 25 10:18:56.172: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 25 10:19:06.197: INFO: Waiting for StatefulSet statefulset-3985/ss2 to complete update May 25 10:19:06.197: INFO: Waiting for Pod statefulset-3985/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 May 25 10:19:06.197: INFO: Waiting for Pod statefulset-3985/ss2-1 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 May 25 10:19:16.208: INFO: Waiting for StatefulSet statefulset-3985/ss2 to complete update May 25 10:19:16.208: INFO: Waiting for Pod statefulset-3985/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 May 25 10:19:26.206: INFO: Deleting all statefulset in ns statefulset-3985 May 25 10:19:26.209: INFO: Scaling statefulset ss2 to 0 May 25 10:19:56.228: INFO: Waiting for statefulset status.replicas updated to 0 May 25 10:19:56.231: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 10:19:56.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3985" for this suite. 
• [SLOW TEST:161.617 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":37,"skipped":678,"failed":0}
May 25 10:19:56.255: INFO: Running AfterSuite actions on all nodes
May 25 10:19:56.256: INFO: Running AfterSuite actions on node 1
May 25 10:19:56.256: INFO: Skipping dumping logs from cluster

Ran 320 of 5771 Specs in 702.409 seconds
SUCCESS! -- 320 Passed | 0 Failed | 0 Pending | 5451 Skipped

Ginkgo ran 1 suite in 11m44.274591618s
Test Suite Passed