Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1621612666 - Will randomize all specs
Will run 5484 specs

Running in parallel across 10 nodes

May 21 15:57:48.685: INFO: >>> kubeConfig: /root/.kube/config
May 21 15:57:48.688: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 21 15:57:48.712: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 21 15:57:48.765: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 21 15:57:48.765: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 21 15:57:48.765: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 21 15:57:48.779: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
May 21 15:57:48.779: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 21 15:57:48.779: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds' (0 seconds elapsed)
May 21 15:57:48.779: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 21 15:57:48.779: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed)
May 21 15:57:48.779: INFO: e2e test version: v1.19.11
May 21 15:57:48.780: INFO: kube-apiserver version: v1.19.11
May 21 15:57:48.780: INFO: >>> kubeConfig: /root/.kube/config
May 21 15:57:48.786: INFO: Cluster IP family: ipv4
May 21 15:57:48.784: INFO: >>> kubeConfig: /root/.kube/config
May 21 15:57:48.805: INFO: Cluster IP family: ipv4
May 21 15:57:48.788: INFO: >>> kubeConfig: /root/.kube/config
May 21 15:57:48.809: INFO: Cluster IP family: ipv4
May 21 15:57:48.788: INFO: >>> kubeConfig: /root/.kube/config
May 21 15:57:48.809: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
May 21 15:57:48.799: INFO: >>> kubeConfig: /root/.kube/config
May 21 15:57:48.818: INFO: Cluster IP family: ipv4
SSS
------------------------------
May 21 15:57:48.799: INFO: >>> kubeConfig: /root/.kube/config
May 21 15:57:48.819: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSS
------------------------------
May 21 15:57:48.807: INFO: >>> kubeConfig: /root/.kube/config
May 21 15:57:48.826: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSS
------------------------------
May 21 15:57:48.811: INFO: >>> kubeConfig: /root/.kube/config
May 21 15:57:48.830: INFO: Cluster IP family: ipv4
SSSSSSSSSS
------------------------------
May 21 15:57:48.814: INFO: >>> kubeConfig: /root/.kube/config
May 21 15:57:48.834: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSS
------------------------------
May 21 15:57:48.819: INFO: >>> kubeConfig: /root/.kube/config
May 21 15:57:48.839: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:57:48.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
May 21 15:57:48.836: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 21 15:57:48.842: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 21 15:57:51.866: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:57:51.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9957" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:57:48.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
May 21 15:57:48.850: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 21 15:57:48.856: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 15:57:48.863: INFO: Waiting up to 5m0s for pod "busybox-user-65534-bf8ffc45-3fb6-4ed5-aa1a-0fac65250dc8" in namespace "security-context-test-9606" to be "Succeeded or Failed"
May 21 15:57:48.865: INFO: Pod "busybox-user-65534-bf8ffc45-3fb6-4ed5-aa1a-0fac65250dc8": Phase="Pending", Reason="", readiness=false. Elapsed: 1.607509ms
May 21 15:57:50.868: INFO: Pod "busybox-user-65534-bf8ffc45-3fb6-4ed5-aa1a-0fac65250dc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005081449s
May 21 15:57:52.872: INFO: Pod "busybox-user-65534-bf8ffc45-3fb6-4ed5-aa1a-0fac65250dc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009369766s
May 21 15:57:52.872: INFO: Pod "busybox-user-65534-bf8ffc45-3fb6-4ed5-aa1a-0fac65250dc8" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:57:52.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9606" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:57:48.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
May 21 15:57:48.971: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 21 15:57:48.974: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
May 21 15:57:48.983: INFO: Waiting up to 5m0s for pod "downward-api-4a6a0f68-07bf-40ea-91da-b7453e13e746" in namespace "downward-api-964" to be "Succeeded or Failed"
May 21 15:57:48.986: INFO: Pod "downward-api-4a6a0f68-07bf-40ea-91da-b7453e13e746": Phase="Pending", Reason="", readiness=false. Elapsed: 2.385696ms
May 21 15:57:50.989: INFO: Pod "downward-api-4a6a0f68-07bf-40ea-91da-b7453e13e746": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005725683s
May 21 15:57:52.993: INFO: Pod "downward-api-4a6a0f68-07bf-40ea-91da-b7453e13e746": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009469434s
STEP: Saw pod success
May 21 15:57:52.993: INFO: Pod "downward-api-4a6a0f68-07bf-40ea-91da-b7453e13e746" satisfied condition "Succeeded or Failed"
May 21 15:57:52.996: INFO: Trying to get logs from node kali-worker pod downward-api-4a6a0f68-07bf-40ea-91da-b7453e13e746 container dapi-container:
STEP: delete the pod
May 21 15:57:53.024: INFO: Waiting for pod downward-api-4a6a0f68-07bf-40ea-91da-b7453e13e746 to disappear
May 21 15:57:53.027: INFO: Pod downward-api-4a6a0f68-07bf-40ea-91da-b7453e13e746 no longer exists
[AfterEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:57:53.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-964" for this suite.
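The "Waiting up to 5m0s for pod … to be "Succeeded or Failed"" records that recur throughout this log come from a poll loop: the framework checks the pod's phase roughly every two seconds, prints the elapsed time on each attempt, and stops on a terminal phase or after the timeout. A minimal, generic sketch of that pattern (the function name and the `get_phase` callback are illustrative stand-ins, not the framework's actual Go API):

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until it reports a terminal pod phase or we time out.

    Mirrors the log records above: one 'Phase="..." ... Elapsed: ...' line
    per attempt, ~2s apart, for up to 5 minutes (timeout=300).
    """
    start = time.monotonic()
    while True:
        phase = get_phase()                  # stand-in for a real API GET of pod.status.phase
        elapsed = time.monotonic() - start
        print(f'Pod: Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase                     # terminal phase: stop polling
        if elapsed > timeout:
            raise TimeoutError(f'pod did not reach "Succeeded or Failed" in {timeout}s')
        time.sleep(interval)

# Stubbed phase sequence mimicking the records above: Pending, Pending, Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_phase(lambda: next(phases), interval=0.0)
```

With that stub the loop prints three records shaped like the ones in the log and returns the terminal phase, which is what "satisfied condition "Succeeded or Failed"" reports.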
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":76,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:57:48.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
May 21 15:57:48.856: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 21 15:57:48.858: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 21 15:57:48.865: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4104c6dd-853d-47f4-9f99-eb790581f780" in namespace "projected-6809" to be "Succeeded or Failed"
May 21 15:57:48.867: INFO: Pod "downwardapi-volume-4104c6dd-853d-47f4-9f99-eb790581f780": Phase="Pending", Reason="", readiness=false. Elapsed: 1.549067ms
May 21 15:57:50.870: INFO: Pod "downwardapi-volume-4104c6dd-853d-47f4-9f99-eb790581f780": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005275231s
May 21 15:57:52.874: INFO: Pod "downwardapi-volume-4104c6dd-853d-47f4-9f99-eb790581f780": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009150569s
STEP: Saw pod success
May 21 15:57:52.874: INFO: Pod "downwardapi-volume-4104c6dd-853d-47f4-9f99-eb790581f780" satisfied condition "Succeeded or Failed"
May 21 15:57:52.878: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-4104c6dd-853d-47f4-9f99-eb790581f780 container client-container:
STEP: delete the pod
May 21 15:57:53.080: INFO: Waiting for pod downwardapi-volume-4104c6dd-853d-47f4-9f99-eb790581f780 to disappear
May 21 15:57:53.083: INFO: Pod downwardapi-volume-4104c6dd-853d-47f4-9f99-eb790581f780 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:57:53.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6809" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:57:48.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
May 21 15:57:48.861: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 21 15:57:48.864: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-c93e11d9-e77c-4888-b797-c16a6f091ce3
STEP: Creating a pod to test consume configMaps
May 21 15:57:48.873: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2ee24312-24f0-4497-a9a3-71652daefc19" in namespace "projected-183" to be "Succeeded or Failed"
May 21 15:57:48.875: INFO: Pod "pod-projected-configmaps-2ee24312-24f0-4497-a9a3-71652daefc19": Phase="Pending", Reason="", readiness=false. Elapsed: 1.512051ms
May 21 15:57:50.879: INFO: Pod "pod-projected-configmaps-2ee24312-24f0-4497-a9a3-71652daefc19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005140014s
May 21 15:57:52.882: INFO: Pod "pod-projected-configmaps-2ee24312-24f0-4497-a9a3-71652daefc19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008653922s
STEP: Saw pod success
May 21 15:57:52.882: INFO: Pod "pod-projected-configmaps-2ee24312-24f0-4497-a9a3-71652daefc19" satisfied condition "Succeeded or Failed"
May 21 15:57:52.885: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-2ee24312-24f0-4497-a9a3-71652daefc19 container projected-configmap-volume-test:
STEP: delete the pod
May 21 15:57:53.281: INFO: Waiting for pod pod-projected-configmaps-2ee24312-24f0-4497-a9a3-71652daefc19 to disappear
May 21 15:57:53.284: INFO: Pod pod-projected-configmaps-2ee24312-24f0-4497-a9a3-71652daefc19 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:57:53.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-183" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":11,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:57:48.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
May 21 15:57:48.884: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 21 15:57:48.887: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's args
May 21 15:57:48.893: INFO: Waiting up to 5m0s for pod "var-expansion-c18409db-f624-48f2-a08f-ded307c0fb69" in namespace "var-expansion-1585" to be "Succeeded or Failed"
May 21 15:57:48.895: INFO: Pod "var-expansion-c18409db-f624-48f2-a08f-ded307c0fb69": Phase="Pending", Reason="", readiness=false. Elapsed: 1.670585ms
May 21 15:57:50.898: INFO: Pod "var-expansion-c18409db-f624-48f2-a08f-ded307c0fb69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005274407s
May 21 15:57:52.902: INFO: Pod "var-expansion-c18409db-f624-48f2-a08f-ded307c0fb69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008425863s
STEP: Saw pod success
May 21 15:57:52.902: INFO: Pod "var-expansion-c18409db-f624-48f2-a08f-ded307c0fb69" satisfied condition "Succeeded or Failed"
May 21 15:57:52.904: INFO: Trying to get logs from node kali-worker2 pod var-expansion-c18409db-f624-48f2-a08f-ded307c0fb69 container dapi-container:
STEP: delete the pod
May 21 15:57:53.681: INFO: Waiting for pod var-expansion-c18409db-f624-48f2-a08f-ded307c0fb69 to disappear
May 21 15:57:53.683: INFO: Pod var-expansion-c18409db-f624-48f2-a08f-ded307c0fb69 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:57:53.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1585" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":22,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:57:52.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 15:57:52.929: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:57:53.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4791" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:57:54.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:57:54.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1820" for this suite.
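The secret-patching spec above deletes and then re-lists secrets "using a LabelSelector". For equality-based selection, an object matches when its labels contain every key/value pair in the selector. A toy sketch of that matching rule (the label key/value and secret names are made up, since the log does not show the actual patch contents):

```python
def matches_selector(labels, selector):
    """True if every key/value pair in the selector appears in the labels.

    Simplified to equality-based matching only; real Kubernetes selectors
    also support set-based requirements (In, NotIn, Exists).
    """
    return all(labels.get(key) == value for key, value in selector.items())

# Hypothetical secrets; only the patched one carries the label being searched for.
secrets = [
    {"name": "patched-secret", "labels": {"testsecret": "true"}},
    {"name": "other-secret", "labels": {}},
]
selected = [s["name"] for s in secrets
            if matches_selector(s["labels"], {"testsecret": "true"})]
```

Listing with the selector returns only `patched-secret`, which is how the final "searching for label name and value in patch" step can confirm both the patch and the selector-scoped delete.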
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":-1,"completed":3,"skipped":30,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:57:48.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
May 21 15:57:48.871: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 21 15:57:48.874: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:57:55.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6205" for this suite.
• [SLOW TEST:7.050 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":1,"skipped":37,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:57:48.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
May 21 15:57:48.867: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 21 15:57:48.870: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-8205
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 21 15:57:48.872: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 21 15:57:48.886: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 21 15:57:50.889: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 21 15:57:52.889: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 15:57:54.889: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 15:57:56.889: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 15:57:58.889: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 15:58:00.890: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 15:58:02.889: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 15:58:04.890: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 21 15:58:04.894: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 21 15:58:06.921: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.42 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8205 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 21 15:58:06.921: INFO: >>> kubeConfig: /root/.kube/config
May 21 15:58:07.995: INFO: Found all expected endpoints: [netserver-0]
May 21 15:58:07.998: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.35 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8205 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 21 15:58:07.998: INFO: >>> kubeConfig: /root/.kube/config
May 21 15:58:09.119: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:58:09.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8205" for this suite.
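The connectivity check above execs into a host-network test pod and runs `echo hostName | nc -w 1 -u <pod-ip> 8081`; the netserver pod answers the UDP probe with its hostname, and a non-empty reply is what lets the framework report "Found all expected endpoints". A loopback sketch of the same exchange in Python sockets (the port, addresses, and the reply payload are stand-ins for the cluster's pod IPs and hostnames):

```python
import socket

# Server side: plays the role of the netserver pod listening on UDP 8081.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))            # kernel assigns a free port
server_addr = server.getsockname()

# Client side: plays the role of `echo hostName | nc -w 1 -u <pod-ip> 8081`.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(1.0)                   # mirrors nc's -w 1 one-second timeout
client.sendto(b"hostName", server_addr)

request, peer = server.recvfrom(1024)    # netserver reads the probe...
server.sendto(b"netserver-0", peer)      # ...and replies with its "hostname"

reply = client.recv(1024).decode()       # non-empty reply => endpoint reachable
client.close()
server.close()
```

Because UDP gives no delivery guarantee, the real test retries the probe and collects replies until every expected netserver hostname has been seen; that is why each `ExecWithOptions` record above targets one pod IP at a time.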
• [SLOW TEST:20.290 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:57:54.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-742
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-742
STEP: creating replication controller externalsvc in namespace services-742
I0521 15:57:54.257233      21 runners.go:190] Created replication controller with name: externalsvc, namespace: services-742, replica count: 2
I0521 15:57:57.307768      21 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0521 15:58:00.308091      21 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the ClusterIP service to type=ExternalName
May 21 15:58:00.321: INFO: Creating new exec pod
May 21 15:58:02.331: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-742 exec execpod5fh5s -- /bin/sh -x -c nslookup clusterip-service.services-742.svc.cluster.local'
May 21 15:58:02.637: INFO: stderr: "+ nslookup clusterip-service.services-742.svc.cluster.local\n"
May 21 15:58:02.637: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-742.svc.cluster.local\tcanonical name = externalsvc.services-742.svc.cluster.local.\nName:\texternalsvc.services-742.svc.cluster.local\nAddress: 10.96.210.55\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-742, will wait for the garbage collector to delete the pods
May 21 15:58:02.697: INFO: Deleting ReplicationController externalsvc took: 5.739206ms
May 21 15:58:03.397: INFO: Terminating ReplicationController externalsvc pods took: 700.238187ms
May 21 15:58:10.511: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:58:10.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-742" for this suite.
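The nslookup stdout captured above is the evidence that the type change worked: after the service flips to `type=ExternalName`, its original name resolves as a CNAME to `externalsvc.services-742.svc.cluster.local`. A small check one might run over that captured output (the regex is ours, not the test's; the stdout string is copied verbatim from the log record):

```python
import re

# stdout exactly as captured in the log record above
stdout = (
    "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\n"
    "clusterip-service.services-742.svc.cluster.local\tcanonical name = "
    "externalsvc.services-742.svc.cluster.local.\n"
    "Name:\texternalsvc.services-742.svc.cluster.local\nAddress: 10.96.210.55\n\n"
)

# Pull out the CNAME target; nslookup prints it with a trailing root dot.
match = re.search(r"canonical name = (\S+)\.\n", stdout)
cname = match.group(1) if match else None
```

A match confirms the service name is now an alias rather than a ClusterIP A record, which is exactly what the spec asserts before it tears the namespace down.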
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:16.320 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to change the type from ClusterIP to ExternalName [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":4,"skipped":110,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:58:10.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
May 21 15:58:10.554: INFO: Waiting up to 5m0s for pod "pod-c0a19a69-0c8c-4004-b1b7-dfc219b321bb" in namespace "emptydir-4056" to be "Succeeded or Failed"
May 21 15:58:10.556: INFO: Pod "pod-c0a19a69-0c8c-4004-b1b7-dfc219b321bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.257032ms
May 21 15:58:12.561: INFO: Pod "pod-c0a19a69-0c8c-4004-b1b7-dfc219b321bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006522685s
STEP: Saw pod success
May 21 15:58:12.561: INFO: Pod "pod-c0a19a69-0c8c-4004-b1b7-dfc219b321bb" satisfied condition "Succeeded or Failed"
May 21 15:58:12.564: INFO: Trying to get logs from node kali-worker2 pod pod-c0a19a69-0c8c-4004-b1b7-dfc219b321bb container test-container:
STEP: delete the pod
May 21 15:58:12.577: INFO: Waiting for pod pod-c0a19a69-0c8c-4004-b1b7-dfc219b321bb to disappear
May 21 15:58:12.580: INFO: Pod pod-c0a19a69-0c8c-4004-b1b7-dfc219b321bb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:58:12.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4056" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":111,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:57:53.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
May 21 15:57:53.416: INFO: >>> kubeConfig: /root/.kube/config
May 21 15:57:57.501: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:58:13.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9254" for this suite.
• [SLOW TEST:20.147 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of different groups [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":2,"skipped":59,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:57:53.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-2268
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 21 15:57:53.122: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 21 15:57:53.141: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 21 15:57:55.146: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready =
true) May 21 15:57:57.145: INFO: The status of Pod netserver-0 is Running (Ready = false) May 21 15:57:59.144: INFO: The status of Pod netserver-0 is Running (Ready = false) May 21 15:58:01.145: INFO: The status of Pod netserver-0 is Running (Ready = false) May 21 15:58:03.145: INFO: The status of Pod netserver-0 is Running (Ready = false) May 21 15:58:05.145: INFO: The status of Pod netserver-0 is Running (Ready = false) May 21 15:58:07.145: INFO: The status of Pod netserver-0 is Running (Ready = false) May 21 15:58:09.144: INFO: The status of Pod netserver-0 is Running (Ready = false) May 21 15:58:11.145: INFO: The status of Pod netserver-0 is Running (Ready = false) May 21 15:58:13.144: INFO: The status of Pod netserver-0 is Running (Ready = true) May 21 15:58:13.150: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 21 15:58:15.169: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.50:8080/dial?request=hostname&protocol=http&host=10.244.1.46&port=8080&tries=1'] Namespace:pod-network-test-2268 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 15:58:15.169: INFO: >>> kubeConfig: /root/.kube/config May 21 15:58:15.251: INFO: Waiting for responses: map[] May 21 15:58:15.254: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.50:8080/dial?request=hostname&protocol=http&host=10.244.2.41&port=8080&tries=1'] Namespace:pod-network-test-2268 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 15:58:15.254: INFO: >>> kubeConfig: /root/.kube/config May 21 15:58:15.340: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 15:58:15.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "pod-network-test-2268" for this suite. • [SLOW TEST:22.255 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 15:57:53.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 15:57:53.090: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 21 15:57:58.093: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 21 15:58:02.099: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 21 15:58:04.103: INFO: Creating deployment "test-rollover-deployment" May 21 15:58:04.110: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 21 
15:58:06.116: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 21 15:58:06.122: INFO: Ensure that both replica sets have 1 created replica May 21 15:58:06.128: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 21 15:58:06.136: INFO: Updating deployment test-rollover-deployment May 21 15:58:06.136: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 21 15:58:08.142: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 21 15:58:08.148: INFO: Make sure deployment "test-rollover-deployment" is complete May 21 15:58:08.154: INFO: all replica sets need to contain the pod-template-hash label May 21 15:58:08.154: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209484, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209484, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209487, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209484, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 21 15:58:10.160: INFO: all replica sets need to contain the pod-template-hash label May 21 15:58:10.160: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209484, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209484, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209487, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209484, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 21 15:58:12.161: INFO: all replica sets need to contain the pod-template-hash label May 21 15:58:12.161: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209484, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209484, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209487, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209484, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 21 15:58:14.160: INFO: all replica sets need to contain the pod-template-hash label May 21 15:58:14.160: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209484, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209484, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209487, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209484, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 21 15:58:16.161: INFO: all replica sets need to contain the pod-template-hash label May 21 15:58:16.161: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209484, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209484, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209487, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209484, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 21 15:58:18.160: INFO: May 21 
15:58:18.161: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 May 21 15:58:18.169: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-6467 /apis/apps/v1/namespaces/deployment-6467/deployments/test-rollover-deployment 7f2bdfb7-32a3-4734-90d9-23359a72244b 13458 2 2021-05-21 15:58:04 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-05-21 15:58:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-05-21 15:58:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00274f3c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-05-21 15:58:04 +0000 
UTC,LastTransitionTime:2021-05-21 15:58:04 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-5797c7764" has successfully progressed.,LastUpdateTime:2021-05-21 15:58:17 +0000 UTC,LastTransitionTime:2021-05-21 15:58:04 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 21 15:58:18.173: INFO: New ReplicaSet "test-rollover-deployment-5797c7764" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-5797c7764 deployment-6467 /apis/apps/v1/namespaces/deployment-6467/replicasets/test-rollover-deployment-5797c7764 85c89e98-92ba-4e78-9ec3-91cf2692570a 13445 2 2021-05-21 15:58:06 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 7f2bdfb7-32a3-4734-90d9-23359a72244b 0xc0026dea20 0xc0026dea21}] [] [{kube-controller-manager Update apps/v1 2021-05-21 15:58:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f2bdfb7-32a3-4734-90d9-23359a72244b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5797c7764,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0026dea98 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 21 15:58:18.173: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 21 15:58:18.173: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-6467 /apis/apps/v1/namespaces/deployment-6467/replicasets/test-rollover-controller e033e3f3-5b5f-4dd3-bf4f-4d7d430f02d2 13457 2 2021-05-21 15:57:53 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 7f2bdfb7-32a3-4734-90d9-23359a72244b 0xc0026de917 0xc0026de918}] [] [{e2e.test Update apps/v1 2021-05-21 15:57:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-05-21 15:58:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f2bdfb7-32a3-4734-90d9-23359a72244b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0026de9b8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 21 15:58:18.174: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-6467 /apis/apps/v1/namespaces/deployment-6467/replicasets/test-rollover-deployment-78bc8b888c 81a4a048-3d25-4590-945d-c9cc3802f378 13142 2 2021-05-21 15:58:04 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 7f2bdfb7-32a3-4734-90d9-23359a72244b 0xc0026deb07 0xc0026deb08}] [] [{kube-controller-manager Update apps/v1 2021-05-21 15:58:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f2bdfb7-32a3-4734-90d9-23359a72244b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0026deb98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] 
nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 21 15:58:18.177: INFO: Pod "test-rollover-deployment-5797c7764-f2trn" is available: &Pod{ObjectMeta:{test-rollover-deployment-5797c7764-f2trn test-rollover-deployment-5797c7764- deployment-6467 /api/v1/namespaces/deployment-6467/pods/test-rollover-deployment-5797c7764-f2trn caa002d1-5b41-4660-9548-cc50c287279d 13165 0 2021-05-21 15:58:06 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.50" ], "mac": "0a:4e:08:6d:f7:a7", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.50" ], "mac": "0a:4e:08:6d:f7:a7", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet test-rollover-deployment-5797c7764 85c89e98-92ba-4e78-9ec3-91cf2692570a 0xc0026df110 0xc0026df111}] [] [{kube-controller-manager Update v1 2021-05-21 15:58:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85c89e98-92ba-4e78-9ec3-91cf2692570a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-21 15:58:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-21 
15:58:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.50\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v5fp8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v5fp8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v5fp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSour
ce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 15:58:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 15:58:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 15:58:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 15:58:06 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.50,StartTime:2021-05-21 15:58:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-21 15:58:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://34b097361cd09cef67d35f661dfd51211e24e8083930fa45af8f310a799b6cef,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.50,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 15:58:18.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6467" for this suite. 
• [SLOW TEST:25.132 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":2,"skipped":85,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:58:12.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 21 15:58:13.109: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 21 15:58:15.117: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209493, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209493, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209493, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209493, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 21 15:58:18.127: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a validating webhook configuration
May 21 15:58:18.145: INFO: Waiting for webhook configuration to be ready...
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:58:18.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6215" for this suite.
STEP: Destroying namespace "webhook-6215-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.738 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":6,"skipped":112,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:58:15.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
May 21 15:58:15.405: INFO: Waiting up to 5m0s for pod "pod-76431672-f9c8-4cd7-9832-b23c0bba32bf" in namespace "emptydir-7361" to be "Succeeded or Failed"
May 21 15:58:15.409: INFO: Pod "pod-76431672-f9c8-4cd7-9832-b23c0bba32bf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.384624ms
May 21 15:58:17.413: INFO: Pod "pod-76431672-f9c8-4cd7-9832-b23c0bba32bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007030514s
May 21 15:58:19.416: INFO: Pod "pod-76431672-f9c8-4cd7-9832-b23c0bba32bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010778616s
STEP: Saw pod success
May 21 15:58:19.416: INFO: Pod "pod-76431672-f9c8-4cd7-9832-b23c0bba32bf" satisfied condition "Succeeded or Failed"
May 21 15:58:19.419: INFO: Trying to get logs from node kali-worker2 pod pod-76431672-f9c8-4cd7-9832-b23c0bba32bf container test-container:
STEP: delete the pod
May 21 15:58:19.432: INFO: Waiting for pod pod-76431672-f9c8-4cd7-9832-b23c0bba32bf to disappear
May 21 15:58:19.435: INFO: Pod pod-76431672-f9c8-4cd7-9832-b23c0bba32bf no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:58:19.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7361" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":30,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:58:19.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-7bfb73da-38ce-49b0-8e21-b05c0f0d30f2
STEP: Creating a pod to test consume secrets
May 21 15:58:19.488: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4d6e018d-b207-4a25-8421-2431a36d24ad" in namespace "projected-5258" to be "Succeeded or Failed"
May 21 15:58:19.490: INFO: Pod "pod-projected-secrets-4d6e018d-b207-4a25-8421-2431a36d24ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.440926ms
May 21 15:58:21.493: INFO: Pod "pod-projected-secrets-4d6e018d-b207-4a25-8421-2431a36d24ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005599405s
May 21 15:58:23.496: INFO: Pod "pod-projected-secrets-4d6e018d-b207-4a25-8421-2431a36d24ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008504431s
STEP: Saw pod success
May 21 15:58:23.496: INFO: Pod "pod-projected-secrets-4d6e018d-b207-4a25-8421-2431a36d24ad" satisfied condition "Succeeded or Failed"
May 21 15:58:23.499: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-4d6e018d-b207-4a25-8421-2431a36d24ad container projected-secret-volume-test:
STEP: delete the pod
May 21 15:58:23.511: INFO: Waiting for pod pod-projected-secrets-4d6e018d-b207-4a25-8421-2431a36d24ad to disappear
May 21 15:58:23.513: INFO: Pod pod-projected-secrets-4d6e018d-b207-4a25-8421-2431a36d24ad no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:58:23.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5258" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":33,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:58:18.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
May 21 15:58:20.913: INFO: Successfully updated pod "annotationupdate2ccb0b1f-0826-471f-a61c-3f0417b75850"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:58:24.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8264" for this suite.
• [SLOW TEST:6.590 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":124,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:58:24.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 21 15:58:25.003: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2cf781ca-8749-4336-9396-a2b8b2580ed8" in namespace "projected-5688" to be "Succeeded or Failed"
May 21 15:58:25.005: INFO: Pod "downwardapi-volume-2cf781ca-8749-4336-9396-a2b8b2580ed8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107595ms
May 21 15:58:27.009: INFO: Pod "downwardapi-volume-2cf781ca-8749-4336-9396-a2b8b2580ed8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005648358s
STEP: Saw pod success
May 21 15:58:27.009: INFO: Pod "downwardapi-volume-2cf781ca-8749-4336-9396-a2b8b2580ed8" satisfied condition "Succeeded or Failed"
May 21 15:58:27.012: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-2cf781ca-8749-4336-9396-a2b8b2580ed8 container client-container:
STEP: delete the pod
May 21 15:58:27.027: INFO: Waiting for pod downwardapi-volume-2cf781ca-8749-4336-9396-a2b8b2580ed8 to disappear
May 21 15:58:27.030: INFO: Pod downwardapi-volume-2cf781ca-8749-4336-9396-a2b8b2580ed8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:58:27.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5688" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":142,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:58:27.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 21 15:58:27.087: INFO: Waiting up to 5m0s for pod "downwardapi-volume-98350ff2-b88e-4f7e-9fe2-a4dc741b1d31" in namespace "projected-1188" to be "Succeeded or Failed"
May 21 15:58:27.089: INFO: Pod "downwardapi-volume-98350ff2-b88e-4f7e-9fe2-a4dc741b1d31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.444786ms
May 21 15:58:29.093: INFO: Pod "downwardapi-volume-98350ff2-b88e-4f7e-9fe2-a4dc741b1d31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006131341s
STEP: Saw pod success
May 21 15:58:29.093: INFO: Pod "downwardapi-volume-98350ff2-b88e-4f7e-9fe2-a4dc741b1d31" satisfied condition "Succeeded or Failed"
May 21 15:58:29.096: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-98350ff2-b88e-4f7e-9fe2-a4dc741b1d31 container client-container:
STEP: delete the pod
May 21 15:58:29.110: INFO: Waiting for pod downwardapi-volume-98350ff2-b88e-4f7e-9fe2-a4dc741b1d31 to disappear
May 21 15:58:29.112: INFO: Pod downwardapi-volume-98350ff2-b88e-4f7e-9fe2-a4dc741b1d31 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:58:29.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1188" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":148,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:58:09.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[BeforeEach] Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299
[It] should scale a replication controller [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a replication controller
May 21 15:58:09.166: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6970 create -f -'
May 21 15:58:09.504: INFO: stderr: ""
May 21 15:58:09.504: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 21 15:58:09.504: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6970 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
May 21 15:58:09.627: INFO: stderr: ""
May 21 15:58:09.627: INFO: stdout: "update-demo-nautilus-mfbjg update-demo-nautilus-xdp6w "
May 21 15:58:09.627: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6970 get pods update-demo-nautilus-mfbjg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 21 15:58:09.744: INFO: stderr: ""
May 21 15:58:09.744: INFO: stdout: ""
May 21 15:58:09.744: INFO: update-demo-nautilus-mfbjg is created but not running
May 21 15:58:14.744: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6970 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
May 21 15:58:14.878: INFO: stderr: ""
May 21 15:58:14.878: INFO: stdout: "update-demo-nautilus-mfbjg update-demo-nautilus-xdp6w "
May 21 15:58:14.879: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6970 get pods update-demo-nautilus-mfbjg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 21 15:58:14.998: INFO: stderr: ""
May 21 15:58:14.998: INFO: stdout: "true"
May 21 15:58:14.998: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6970 get pods update-demo-nautilus-mfbjg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
May 21 15:58:15.123: INFO: stderr: ""
May 21 15:58:15.123: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 21 15:58:15.123: INFO: validating pod update-demo-nautilus-mfbjg
May 21 15:58:15.127: INFO: got data: {
  "image": "nautilus.jpg"
}
May 21 15:58:15.127: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 21 15:58:15.127: INFO: update-demo-nautilus-mfbjg is verified up and running
May 21 15:58:15.127: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6970 get pods update-demo-nautilus-xdp6w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 21 15:58:15.246: INFO: stderr: ""
May 21 15:58:15.246: INFO: stdout: "true"
May 21 15:58:15.246: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6970 get pods update-demo-nautilus-xdp6w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
May 21 15:58:15.367: INFO: stderr: ""
May 21 15:58:15.367: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 21 15:58:15.367: INFO: validating pod update-demo-nautilus-xdp6w
May 21 15:58:15.372: INFO: got data: {
  "image": "nautilus.jpg"
}
May 21 15:58:15.372: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 21 15:58:15.372: INFO: update-demo-nautilus-xdp6w is verified up and running
STEP: scaling down the replication controller
May 21 15:58:15.375: INFO: scanned /root for discovery docs:
May 21 15:58:15.375: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6970 scale rc update-demo-nautilus --replicas=1 --timeout=5m'
May 21 15:58:16.516: INFO: stderr: ""
May 21 15:58:16.516: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 21 15:58:16.516: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6970 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
May 21 15:58:16.646: INFO: stderr: ""
May 21 15:58:16.646: INFO: stdout: "update-demo-nautilus-mfbjg update-demo-nautilus-xdp6w "
STEP: Replicas for name=update-demo: expected=1 actual=2
May 21 15:58:21.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6970 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
May 21 15:58:21.768: INFO: stderr: ""
May 21 15:58:21.768: INFO: stdout: "update-demo-nautilus-xdp6w "
May 21 15:58:21.768: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6970 get pods update-demo-nautilus-xdp6w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 21 15:58:21.892: INFO: stderr: ""
May 21 15:58:21.892: INFO: stdout: "true"
May 21 15:58:21.892: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6970 get pods update-demo-nautilus-xdp6w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
May 21 15:58:22.003: INFO: stderr: ""
May 21 15:58:22.003: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 21 15:58:22.003: INFO: validating pod update-demo-nautilus-xdp6w
May 21 15:58:22.007: INFO: got data: {
  "image": "nautilus.jpg"
}
May 21 15:58:22.007: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 21 15:58:22.007: INFO: update-demo-nautilus-xdp6w is verified up and running
STEP: scaling up the replication controller
May 21 15:58:22.011: INFO: scanned /root for discovery docs:
May 21 15:58:22.011: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6970 scale rc update-demo-nautilus --replicas=2 --timeout=5m'
May 21 15:58:23.149: INFO: stderr: ""
May 21 15:58:23.150: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 21 15:58:23.150: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6970 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
May 21 15:58:23.274: INFO: stderr: ""
May 21 15:58:23.274: INFO: stdout: "update-demo-nautilus-djxt7 update-demo-nautilus-xdp6w "
May 21 15:58:23.274: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6970 get pods update-demo-nautilus-djxt7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 21 15:58:23.388: INFO: stderr: ""
May 21 15:58:23.388: INFO: stdout: ""
May 21 15:58:23.388: INFO: update-demo-nautilus-djxt7 is created but not running
May 21 15:58:28.388: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6970 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
May 21 15:58:28.516: INFO: stderr: ""
May 21 15:58:28.516: INFO: stdout: "update-demo-nautilus-djxt7 update-demo-nautilus-xdp6w "
May 21 15:58:28.516: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6970 get pods update-demo-nautilus-djxt7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 21 15:58:28.634: INFO: stderr: ""
May 21 15:58:28.634: INFO: stdout: "true"
May 21 15:58:28.634: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6970 get pods update-demo-nautilus-djxt7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
May 21 15:58:28.751: INFO: stderr: ""
May 21 15:58:28.751: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 21 15:58:28.751: INFO: validating pod update-demo-nautilus-djxt7
May 21 15:58:28.756: INFO: got data: {
  "image": "nautilus.jpg"
}
May 21 15:58:28.756: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 21 15:58:28.756: INFO: update-demo-nautilus-djxt7 is verified up and running
May 21 15:58:28.756: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6970 get pods update-demo-nautilus-xdp6w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 21 15:58:28.878: INFO: stderr: ""
May 21 15:58:28.878: INFO: stdout: "true"
May 21 15:58:28.878: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6970 get pods update-demo-nautilus-xdp6w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
May 21 15:58:28.996: INFO: stderr: ""
May 21 15:58:28.996: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 21 15:58:28.996: INFO: validating pod update-demo-nautilus-xdp6w
May 21 15:58:28.999: INFO: got data: {
  "image": "nautilus.jpg"
}
May 21 15:58:28.999: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 21 15:58:28.999: INFO: update-demo-nautilus-xdp6w is verified up and running
STEP: using delete to clean up resources
May 21 15:58:28.999: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6970 delete --grace-period=0 --force -f -'
May 21 15:58:29.125: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 21 15:58:29.125: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May 21 15:58:29.126: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6970 get rc,svc -l name=update-demo --no-headers'
May 21 15:58:29.251: INFO: stderr: "No resources found in kubectl-6970 namespace.\n"
May 21 15:58:29.251: INFO: stdout: ""
May 21 15:58:29.251: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6970 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 21 15:58:29.381: INFO: stderr: ""
May 21 15:58:29.381: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:58:29.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6970" for this suite.
• [SLOW TEST:20.249 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:297
    should scale a replication controller [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":2,"skipped":18,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:58:23.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 21 15:58:24.192: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 21 15:58:26.200: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209504, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209504, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209504, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209504, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 21 15:58:29.213: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 15:58:29.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-46-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:58:30.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8071" for this suite.
STEP: Destroying namespace "webhook-8071-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.820 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:58:29.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-ee8636f0-b8a7-4f55-aa19-7bbccb0efcdb
STEP: Creating a pod to test consume secrets
May 21 15:58:29.425: INFO: Waiting up to 5m0s for pod "pod-secrets-90e0ad38-dfa2-4d4b-9148-31b678e71941" in namespace "secrets-7368" to be "Succeeded or Failed"
May 21 15:58:29.427: INFO: Pod "pod-secrets-90e0ad38-dfa2-4d4b-9148-31b678e71941": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226799ms
May 21 15:58:31.431: INFO: Pod "pod-secrets-90e0ad38-dfa2-4d4b-9148-31b678e71941": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006120777s
STEP: Saw pod success
May 21 15:58:31.431: INFO: Pod "pod-secrets-90e0ad38-dfa2-4d4b-9148-31b678e71941" satisfied condition "Succeeded or Failed"
May 21 15:58:31.434: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-90e0ad38-dfa2-4d4b-9148-31b678e71941 container secret-volume-test:
STEP: delete the pod
May 21 15:58:31.448: INFO: Waiting for pod pod-secrets-90e0ad38-dfa2-4d4b-9148-31b678e71941 to disappear
May 21 15:58:31.451: INFO: Pod pod-secrets-90e0ad38-dfa2-4d4b-9148-31b678e71941 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:58:31.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7368" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0}
SSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":5,"skipped":62,"failed":0}
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:58:30.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 21 15:58:30.415: INFO: Waiting up to 5m0s for pod "downwardapi-volume-030aaea1-049b-4066-8e1d-0a784a79c39f" in namespace "downward-api-6424" to be "Succeeded or Failed"
May 21 15:58:30.417: INFO: Pod "downwardapi-volume-030aaea1-049b-4066-8e1d-0a784a79c39f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.398056ms
May 21 15:58:32.422: INFO: Pod "downwardapi-volume-030aaea1-049b-4066-8e1d-0a784a79c39f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006628481s
STEP: Saw pod success
May 21 15:58:32.422: INFO: Pod "downwardapi-volume-030aaea1-049b-4066-8e1d-0a784a79c39f" satisfied condition "Succeeded or Failed"
May 21 15:58:32.425: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-030aaea1-049b-4066-8e1d-0a784a79c39f container client-container:
STEP: delete the pod
May 21 15:58:32.439: INFO: Waiting for pod downwardapi-volume-030aaea1-049b-4066-8e1d-0a784a79c39f to disappear
May 21 15:58:32.442: INFO: Pod downwardapi-volume-030aaea1-049b-4066-8e1d-0a784a79c39f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:58:32.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6424" for this suite.
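Specs like the ones above all drive the same wait loop: poll the pod's phase every couple of seconds until it reaches "Succeeded" or "Failed", bounded by a 5m0s timeout. A minimal Python sketch of that pattern (the real framework is Go; `wait_for_pod_phase` and its injectable `clock`/`sleep` hooks are hypothetical names chosen so the loop can be exercised without a cluster):

```python
import time

def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a phase in `want` or `timeout` expires.

    Returns the final phase, or raises TimeoutError. `get_phase`, `clock`,
    and `sleep` are injectable so the loop is testable without a cluster.
    """
    deadline = clock() + timeout
    while True:
        phase = get_phase()
        if phase in want:
            return phase
        if clock() >= deadline:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        sleep(interval)

# Simulated phase sequence mirroring the log: Pending twice, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_phase(lambda: next(phases), sleep=lambda s: None)
```

The "Elapsed:" values in the log are just the time since the first poll, which is why successive lines are roughly one `interval` apart.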
•
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:57:51.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: waiting for pod running
STEP: creating a file in subpath
May 21 15:57:55.953: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-5107 PodName:var-expansion-3933379b-f201-44a6-8a9e-012cadd2b23b ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 21 15:57:55.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: test for file in mounted path
May 21 15:57:56.356: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-5107 PodName:var-expansion-3933379b-f201-44a6-8a9e-012cadd2b23b ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 21 15:57:56.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: updating the annotation value
May 21 15:57:56.930: INFO: Successfully updated pod "var-expansion-3933379b-f201-44a6-8a9e-012cadd2b23b"
STEP: waiting for annotated pod running
STEP: deleting the pod gracefully
May 21 15:57:56.932: INFO: Deleting pod "var-expansion-3933379b-f201-44a6-8a9e-012cadd2b23b" in namespace "var-expansion-5107"
May 21 15:57:56.935: INFO: Wait up to 5m0s for pod "var-expansion-3933379b-f201-44a6-8a9e-012cadd2b23b" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:58:32.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5107" for this suite.
• [SLOW TEST:41.038 seconds]
[k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}
SSSS
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":62,"failed":0}
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:58:32.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 21 15:58:32.483: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d0e1c8bc-b9a6-4b82-8a2b-de0084d6ceb7" in namespace "projected-7053" to be "Succeeded or Failed"
May 21 15:58:32.485: INFO: Pod "downwardapi-volume-d0e1c8bc-b9a6-4b82-8a2b-de0084d6ceb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.796841ms
May 21 15:58:34.488: INFO: Pod "downwardapi-volume-d0e1c8bc-b9a6-4b82-8a2b-de0084d6ceb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005226785s
May 21 15:58:36.491: INFO: Pod "downwardapi-volume-d0e1c8bc-b9a6-4b82-8a2b-de0084d6ceb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008269483s
STEP: Saw pod success
May 21 15:58:36.491: INFO: Pod "downwardapi-volume-d0e1c8bc-b9a6-4b82-8a2b-de0084d6ceb7" satisfied condition "Succeeded or Failed"
May 21 15:58:36.494: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-d0e1c8bc-b9a6-4b82-8a2b-de0084d6ceb7 container client-container:
STEP: delete the pod
May 21 15:58:36.504: INFO: Waiting for pod downwardapi-volume-d0e1c8bc-b9a6-4b82-8a2b-de0084d6ceb7 to disappear
May 21 15:58:36.507: INFO: Pod downwardapi-volume-d0e1c8bc-b9a6-4b82-8a2b-de0084d6ceb7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:58:36.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7053" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":62,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:58:32.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should add annotations for pods in rc [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating Agnhost RC
May 21 15:58:32.981: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-2477 create -f -'
May 21 15:58:33.319: INFO: stderr: ""
May 21 15:58:33.319: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
May 21 15:58:34.323: INFO: Selector matched 1 pods for map[app:agnhost]
May 21 15:58:34.323: INFO: Found 0 / 1
May 21 15:58:35.323: INFO: Selector matched 1 pods for map[app:agnhost]
May 21 15:58:35.323: INFO: Found 0 / 1
May 21 15:58:36.323: INFO: Selector matched 1 pods for map[app:agnhost]
May 21 15:58:36.323: INFO: Found 0 / 1
May 21 15:58:37.323: INFO: Selector matched 1 pods for map[app:agnhost]
May 21 15:58:37.323: INFO: Found 1 / 1
May 21 15:58:37.323: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
May 21 15:58:37.325: INFO: Selector matched 1 pods for map[app:agnhost]
May 21 15:58:37.326: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 21 15:58:37.326: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-2477 patch pod agnhost-primary-nc846 -p {"metadata":{"annotations":{"x":"y"}}}'
May 21 15:58:37.450: INFO: stderr: ""
May 21 15:58:37.450: INFO: stdout: "pod/agnhost-primary-nc846 patched\n"
STEP: checking annotations
May 21 15:58:37.454: INFO: Selector matched 1 pods for map[app:agnhost]
May 21 15:58:37.454: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:58:37.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2477" for this suite.
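The `kubectl patch pod agnhost-primary-nc846 -p {"metadata":{"annotations":{"x":"y"}}}` call above merges a partial object into the live pod. For plain string maps such as annotations, this behaves like a recursive map merge. A simplified sketch of that effect (`merge_patch` is a hypothetical helper; it follows RFC 7386 JSON Merge Patch semantics for maps and deliberately ignores the strategic-merge list rules real kubectl applies):

```python
def merge_patch(obj, patch):
    """Recursively merge `patch` into a copy of `obj`.

    Nested dicts merge key by key; a None value deletes the key
    (as in RFC 7386); any other value replaces the existing one.
    """
    out = dict(obj)
    for key, val in patch.items():
        if val is None:
            out.pop(key, None)
        elif isinstance(val, dict) and isinstance(out.get(key), dict):
            out[key] = merge_patch(out[key], val)
        else:
            out[key] = val
    return out

# The pod name and patch body come from the log above.
pod = {"metadata": {"name": "agnhost-primary-nc846", "annotations": {}}}
patched = merge_patch(pod, {"metadata": {"annotations": {"x": "y"}}})
```

After the merge, `patched["metadata"]["annotations"]` carries `{"x": "y"}` while the rest of the metadata is untouched, which is exactly what the "checking annotations" step then verifies.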
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":3,"skipped":15,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:58:37.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should provide secure master service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:58:37.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9764" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
•
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":4,"skipped":21,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:58:18.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-2258
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 21 15:58:18.227: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 21 15:58:18.247: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 21 15:58:20.251: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 15:58:22.252: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 15:58:24.250: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 15:58:26.251: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 15:58:28.251: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 15:58:30.250: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 15:58:32.250: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 21 15:58:32.255: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 21 15:58:34.259: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 21 15:58:36.259: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 21 15:58:40.290: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.53:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2258 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 21 15:58:40.290: INFO: >>> kubeConfig: /root/.kube/config
May 21 15:58:40.421: INFO: Found all expected endpoints: [netserver-0]
May 21 15:58:40.425: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.53:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2258 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 21 15:58:40.425: INFO: >>> kubeConfig: /root/.kube/config
May 21 15:58:40.508: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:58:40.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2258" for this suite.
• [SLOW TEST:22.316 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":93,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:58:37.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-30d59030-c7db-4606-8ea8-81051263bdde
STEP: Creating a pod to test consume secrets
May 21 15:58:37.605: INFO: Waiting up to 5m0s for pod "pod-secrets-7667d7ca-237a-4d2e-ab8c-24b5fd1d2ec0" in namespace "secrets-6734" to be "Succeeded or Failed"
May 21 15:58:37.607: INFO: Pod "pod-secrets-7667d7ca-237a-4d2e-ab8c-24b5fd1d2ec0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112852ms
May 21 15:58:39.610: INFO: Pod "pod-secrets-7667d7ca-237a-4d2e-ab8c-24b5fd1d2ec0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005524644s
May 21 15:58:41.613: INFO: Pod "pod-secrets-7667d7ca-237a-4d2e-ab8c-24b5fd1d2ec0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00813164s
STEP: Saw pod success
May 21 15:58:41.613: INFO: Pod "pod-secrets-7667d7ca-237a-4d2e-ab8c-24b5fd1d2ec0" satisfied condition "Succeeded or Failed"
May 21 15:58:41.616: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-7667d7ca-237a-4d2e-ab8c-24b5fd1d2ec0 container secret-volume-test:
STEP: delete the pod
May 21 15:58:41.628: INFO: Waiting for pod pod-secrets-7667d7ca-237a-4d2e-ab8c-24b5fd1d2ec0 to disappear
May 21 15:58:41.630: INFO: Pod pod-secrets-7667d7ca-237a-4d2e-ab8c-24b5fd1d2ec0 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:58:41.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6734" for this suite.
STEP: Destroying namespace "secret-namespace-6776" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":50,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:58:40.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
May 21 15:58:40.555: INFO: Waiting up to 5m0s for pod "pod-a676beaa-3835-4e33-b6fe-89c4d5fa65d1" in namespace "emptydir-7314" to be "Succeeded or Failed"
May 21 15:58:40.558: INFO: Pod "pod-a676beaa-3835-4e33-b6fe-89c4d5fa65d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.37479ms
May 21 15:58:42.560: INFO: Pod "pod-a676beaa-3835-4e33-b6fe-89c4d5fa65d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00463471s
STEP: Saw pod success
May 21 15:58:42.560: INFO: Pod "pod-a676beaa-3835-4e33-b6fe-89c4d5fa65d1" satisfied condition "Succeeded or Failed"
May 21 15:58:42.563: INFO: Trying to get logs from node kali-worker pod pod-a676beaa-3835-4e33-b6fe-89c4d5fa65d1 container test-container:
STEP: delete the pod
May 21 15:58:42.573: INFO: Waiting for pod pod-a676beaa-3835-4e33-b6fe-89c4d5fa65d1 to disappear
May 21 15:58:42.575: INFO: Pod pod-a676beaa-3835-4e33-b6fe-89c4d5fa65d1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:58:42.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7314" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":95,"failed":0}
SS
------------------------------
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:58:41.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:58:47.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1090" for this suite.
• [SLOW TEST:6.046 seconds]
[k8s.io] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when scheduling a busybox command in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:41
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":56,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:58:47.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name projected-secret-test-715252a6-61c1-41f2-9e5e-78253d208e28
STEP: Creating a pod to test consume secrets
May 21 15:58:47.745: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fbd62403-37a6-406c-9991-c8fbcb24ed0a" in namespace "projected-7966" to be "Succeeded or Failed"
May 21 15:58:47.747: INFO: Pod "pod-projected-secrets-fbd62403-37a6-406c-9991-c8fbcb24ed0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212438ms
May 21 15:58:49.751: INFO: Pod "pod-projected-secrets-fbd62403-37a6-406c-9991-c8fbcb24ed0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005995072s
STEP: Saw pod success
May 21 15:58:49.751: INFO: Pod "pod-projected-secrets-fbd62403-37a6-406c-9991-c8fbcb24ed0a" satisfied condition "Succeeded or Failed"
May 21 15:58:49.753: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-fbd62403-37a6-406c-9991-c8fbcb24ed0a container secret-volume-test:
STEP: delete the pod
May 21 15:58:49.766: INFO: Waiting for pod pod-projected-secrets-fbd62403-37a6-406c-9991-c8fbcb24ed0a to disappear
May 21 15:58:49.769: INFO: Pod pod-projected-secrets-fbd62403-37a6-406c-9991-c8fbcb24ed0a no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:58:49.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7966" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":66,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:58:29.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-4792
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 21 15:58:29.182: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 21 15:58:29.199: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 21 15:58:31.202: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 15:58:33.202: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 15:58:35.202: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 15:58:37.203: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 15:58:39.202: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 15:58:41.202: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 15:58:43.203: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 15:58:45.202: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 15:58:47.202: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 15:58:49.203: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 15:58:51.203: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 21 15:58:51.209: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 21 15:58:53.234: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.63:8080/dial?request=hostname&protocol=udp&host=10.244.1.57&port=8081&tries=1'] Namespace:pod-network-test-4792 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 21 15:58:53.235: INFO: >>> kubeConfig: /root/.kube/config
May 21 15:58:53.354: INFO: Waiting for responses: map[]
May 21 15:58:53.358: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.63:8080/dial?request=hostname&protocol=udp&host=10.244.2.57&port=8081&tries=1'] Namespace:pod-network-test-4792 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 21 15:58:53.358: INFO: >>> kubeConfig: /root/.kube/config
May 21 15:58:53.481: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:58:53.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4792" for this suite.
• [SLOW TEST:24.336 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":171,"failed":0}
SS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:58:42.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:58:53.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4108" for this suite.
• [SLOW TEST:11.067 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":5,"skipped":97,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:58:53.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 21 15:58:53.714: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ed1dc898-f83a-4ef9-8391-d6fbf3d93373" in namespace "downward-api-293" to be "Succeeded or Failed"
May 21 15:58:53.717: INFO: Pod "downwardapi-volume-ed1dc898-f83a-4ef9-8391-d6fbf3d93373": Phase="Pending", Reason="", readiness=false. Elapsed: 2.550383ms
May 21 15:58:55.720: INFO: Pod "downwardapi-volume-ed1dc898-f83a-4ef9-8391-d6fbf3d93373": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005830433s
STEP: Saw pod success
May 21 15:58:55.720: INFO: Pod "downwardapi-volume-ed1dc898-f83a-4ef9-8391-d6fbf3d93373" satisfied condition "Succeeded or Failed"
May 21 15:58:55.723: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-ed1dc898-f83a-4ef9-8391-d6fbf3d93373 container client-container:
STEP: delete the pod
May 21 15:58:55.736: INFO: Waiting for pod downwardapi-volume-ed1dc898-f83a-4ef9-8391-d6fbf3d93373 to disappear
May 21 15:58:55.738: INFO: Pod downwardapi-volume-ed1dc898-f83a-4ef9-8391-d6fbf3d93373 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:58:55.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-293" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":115,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:58:53.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 15:58:53.523: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
May 21 15:58:55.547: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:58:56.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-518" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":11,"skipped":173,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:58:56.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename certificates
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support CSR API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting /apis
STEP: getting /apis/certificates.k8s.io
STEP: getting /apis/certificates.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
May 21 15:58:57.912: INFO: starting watch
STEP: patching
STEP: updating
May 21 15:58:57.921: INFO: waiting for watch events with expected annotations
May 21 15:58:57.921: INFO: saw patched and updated annotations
STEP: getting /approval
STEP: patching /approval
STEP: updating /approval
STEP: getting
/status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 15:58:57.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-9467" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":12,"skipped":182,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 15:58:55.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7180 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-7180 I0521 15:58:55.799709 20 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7180, replica count: 2 I0521 15:58:58.850184 20 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 21 15:58:58.850: INFO: Creating new 
exec pod May 21 15:59:01.866: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-7180 exec execpodnnmzw -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 21 15:59:02.076: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" May 21 15:59:02.076: INFO: stdout: "" May 21 15:59:02.077: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-7180 exec execpodnnmzw -- /bin/sh -x -c nc -zv -t -w 2 10.96.199.44 80' May 21 15:59:02.283: INFO: stderr: "+ nc -zv -t -w 2 10.96.199.44 80\nConnection to 10.96.199.44 80 port [tcp/http] succeeded!\n" May 21 15:59:02.283: INFO: stdout: "" May 21 15:59:02.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-7180 exec execpodnnmzw -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.2 31283' May 21 15:59:02.512: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.2 31283\nConnection to 172.18.0.2 31283 port [tcp/31283] succeeded!\n" May 21 15:59:02.512: INFO: stdout: "" May 21 15:59:02.512: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-7180 exec execpodnnmzw -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.4 31283' May 21 15:59:02.756: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.4 31283\nConnection to 172.18.0.4 31283 port [tcp/31283] succeeded!\n" May 21 15:59:02.757: INFO: stdout: "" May 21 15:59:02.757: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 15:59:02.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7180" for this suite. 
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:7.026 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to change the type from ExternalName to NodePort [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":7,"skipped":119,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:58:57.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 15:58:58.043: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"50219b2a-6446-4826-8372-8873f1c1935c", Controller:(*bool)(0xc003dcf04e), BlockOwnerDeletion:(*bool)(0xc003dcf04f)}}
May 21 15:58:58.047: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"de8bef19-1b7a-410a-affa-e3c9357a8646", Controller:(*bool)(0xc00374b866), BlockOwnerDeletion:(*bool)(0xc00374b867)}}
May 21 15:58:58.051: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"752d275a-9d07-4b44-a97f-a23bfa6f1383", Controller:(*bool)(0xc003dcf236), BlockOwnerDeletion:(*bool)(0xc003dcf237)}}
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:59:03.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7368" for this suite.
• [SLOW TEST:5.070 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not be blocked by dependency circle [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
[BeforeEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:57:48.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
May 21 15:57:48.871: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 21 15:57:48.874: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-upd-08d95216-08ef-4729-8486-d49c9135a77e
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-08d95216-08ef-4729-8486-d49c9135a77e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:59:07.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2323" for this suite.
• [SLOW TEST:78.897 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":25,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:02.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[BeforeEach] Update Demo
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299
[It] should create and stop a replication controller [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a replication controller
May 21 15:59:02.833: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-433 create -f -'
May 21 15:59:03.092: INFO: stderr: ""
May 21 15:59:03.092: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 21 15:59:03.092: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-433 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
May 21 15:59:03.234: INFO: stderr: ""
May 21 15:59:03.234: INFO: stdout: "update-demo-nautilus-gb926 update-demo-nautilus-z59j5 "
May 21 15:59:03.234: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-433 get pods update-demo-nautilus-gb926 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 21 15:59:03.346: INFO: stderr: ""
May 21 15:59:03.346: INFO: stdout: ""
May 21 15:59:03.346: INFO: update-demo-nautilus-gb926 is created but not running
May 21 15:59:08.347: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-433 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
May 21 15:59:08.472: INFO: stderr: ""
May 21 15:59:08.472: INFO: stdout: "update-demo-nautilus-gb926 update-demo-nautilus-z59j5 "
May 21 15:59:08.472: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-433 get pods update-demo-nautilus-gb926 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 21 15:59:08.602: INFO: stderr: ""
May 21 15:59:08.602: INFO: stdout: "true"
May 21 15:59:08.602: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-433 get pods update-demo-nautilus-gb926 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
May 21 15:59:08.724: INFO: stderr: ""
May 21 15:59:08.724: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 21 15:59:08.724: INFO: validating pod update-demo-nautilus-gb926
May 21 15:59:08.728: INFO: got data: { "image": "nautilus.jpg" }
May 21 15:59:08.728: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 21 15:59:08.729: INFO: update-demo-nautilus-gb926 is verified up and running
May 21 15:59:08.729: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-433 get pods update-demo-nautilus-z59j5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 21 15:59:08.856: INFO: stderr: ""
May 21 15:59:08.856: INFO: stdout: "true"
May 21 15:59:08.856: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-433 get pods update-demo-nautilus-z59j5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
May 21 15:59:08.981: INFO: stderr: ""
May 21 15:59:08.981: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 21 15:59:08.981: INFO: validating pod update-demo-nautilus-z59j5
May 21 15:59:08.985: INFO: got data: { "image": "nautilus.jpg" }
May 21 15:59:08.985: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 21 15:59:08.985: INFO: update-demo-nautilus-z59j5 is verified up and running
STEP: using delete to clean up resources
May 21 15:59:08.985: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-433 delete --grace-period=0 --force -f -'
May 21 15:59:09.107: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 21 15:59:09.107: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May 21 15:59:09.107: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-433 get rc,svc -l name=update-demo --no-headers'
May 21 15:59:09.236: INFO: stderr: "No resources found in kubectl-433 namespace.\n"
May 21 15:59:09.237: INFO: stdout: ""
May 21 15:59:09.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-433 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 21 15:59:09.369: INFO: stderr: ""
May 21 15:59:09.369: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:59:09.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-433" for this suite.
• [SLOW TEST:6.572 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Update Demo
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:297
should create and stop a replication controller [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":8,"skipped":136,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:58:36.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-3721
[It] Should recreate evicted statefulset [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-3721
STEP: Creating statefulset with conflicting port in namespace statefulset-3721
STEP: Waiting until pod test-pod will start running in namespace statefulset-3721
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3721
May 21 15:58:40.625: INFO: Observed stateful pod in namespace: statefulset-3721, name: ss-0, uid: 6cf8f711-a3a1-45eb-bce4-fe8fcdfa2b91, status phase: Pending. Waiting for statefulset controller to delete.
May 21 15:58:41.333: INFO: Observed stateful pod in namespace: statefulset-3721, name: ss-0, uid: 6cf8f711-a3a1-45eb-bce4-fe8fcdfa2b91, status phase: Failed. Waiting for statefulset controller to delete.
May 21 15:58:41.340: INFO: Observed stateful pod in namespace: statefulset-3721, name: ss-0, uid: 6cf8f711-a3a1-45eb-bce4-fe8fcdfa2b91, status phase: Failed. Waiting for statefulset controller to delete.
May 21 15:58:41.343: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3721
STEP: Removing pod with conflicting port in namespace statefulset-3721
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3721 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
May 21 15:58:49.367: INFO: Deleting all statefulset in ns statefulset-3721
May 21 15:58:49.370: INFO: Scaling statefulset ss to 0
May 21 15:59:09.385: INFO: Waiting for statefulset status.replicas updated to 0
May 21 15:59:09.388: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:59:09.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3721" for this suite.
• [SLOW TEST:32.845 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
Should recreate evicted statefulset [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":8,"skipped":93,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:07.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:59:09.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9479" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":2,"skipped":40,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:09.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-3e2ae8e3-d3c4-4e35-8f40-8ef626921b4f
STEP: Creating a pod to test consume configMaps
May 21 15:59:09.432: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1a1e3394-47f3-4e51-b531-62792679f298" in namespace "projected-8917" to be "Succeeded or Failed"
May 21 15:59:09.434: INFO: Pod "pod-projected-configmaps-1a1e3394-47f3-4e51-b531-62792679f298": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131653ms
May 21 15:59:11.437: INFO: Pod "pod-projected-configmaps-1a1e3394-47f3-4e51-b531-62792679f298": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005847186s
STEP: Saw pod success
May 21 15:59:11.438: INFO: Pod "pod-projected-configmaps-1a1e3394-47f3-4e51-b531-62792679f298" satisfied condition "Succeeded or Failed"
May 21 15:59:11.441: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-1a1e3394-47f3-4e51-b531-62792679f298 container projected-configmap-volume-test:
STEP: delete the pod
May 21 15:59:11.455: INFO: Waiting for pod pod-projected-configmaps-1a1e3394-47f3-4e51-b531-62792679f298 to disappear
May 21 15:59:11.458: INFO: Pod pod-projected-configmaps-1a1e3394-47f3-4e51-b531-62792679f298 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:59:11.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8917" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":144,"failed":0}
SSSS
------------------------------
[BeforeEach] [k8s.io] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:09.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 15:59:09.456: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-33479686-3140-4f89-83f8-6b314402a741" in namespace "security-context-test-2503" to be "Succeeded or Failed"
May 21 15:59:09.459: INFO: Pod "busybox-privileged-false-33479686-3140-4f89-83f8-6b314402a741": Phase="Pending", Reason="", readiness=false. Elapsed: 2.481404ms
May 21 15:59:11.462: INFO: Pod "busybox-privileged-false-33479686-3140-4f89-83f8-6b314402a741": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00569764s
May 21 15:59:11.462: INFO: Pod "busybox-privileged-false-33479686-3140-4f89-83f8-6b314402a741" satisfied condition "Succeeded or Failed"
May 21 15:59:11.468: INFO: Got logs for pod "busybox-privileged-false-33479686-3140-4f89-83f8-6b314402a741": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:59:11.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2503" for this suite.
•SSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:11.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-29aa2557-a3d7-4c5c-a8b8-2175638cc163
STEP: Creating a pod to test consume secrets
May 21 15:59:11.520: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b0c90648-6858-469a-b7c2-f9eebdda09a9" in namespace "projected-7457" to be "Succeeded or Failed"
May 21 15:59:11.523: INFO: Pod "pod-projected-secrets-b0c90648-6858-469a-b7c2-f9eebdda09a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.806376ms
May 21 15:59:13.527: INFO: Pod "pod-projected-secrets-b0c90648-6858-469a-b7c2-f9eebdda09a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00672361s
May 21 15:59:15.530: INFO: Pod "pod-projected-secrets-b0c90648-6858-469a-b7c2-f9eebdda09a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010509882s
STEP: Saw pod success
May 21 15:59:15.530: INFO: Pod "pod-projected-secrets-b0c90648-6858-469a-b7c2-f9eebdda09a9" satisfied condition "Succeeded or Failed"
May 21 15:59:15.533: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-b0c90648-6858-469a-b7c2-f9eebdda09a9 container projected-secret-volume-test:
STEP: delete the pod
May 21 15:59:15.546: INFO: Waiting for pod pod-projected-secrets-b0c90648-6858-469a-b7c2-f9eebdda09a9 to disappear
May 21 15:59:15.548: INFO: Pod pod-projected-secrets-b0c90648-6858-469a-b7c2-f9eebdda09a9 no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:59:15.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7457" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":154,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:15.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:59:15.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9161" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":11,"skipped":181,"failed":0}
SSSSSSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":13,"skipped":198,"failed":0}
[BeforeEach] version v1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:03.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-sgjpk in namespace proxy-4504
I0521 15:59:03.103841 21 runners.go:190] Created replication controller with name: proxy-service-sgjpk, namespace: proxy-4504, replica count: 1
I0521 15:59:04.154282 21 runners.go:190] proxy-service-sgjpk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0521 15:59:05.154565 21 runners.go:190] proxy-service-sgjpk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0521 15:59:06.155002 21 runners.go:190] proxy-service-sgjpk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0521 15:59:07.155330 21 runners.go:190] proxy-service-sgjpk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0521 15:59:08.155564 21 runners.go:190] proxy-service-sgjpk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0521 15:59:09.155809 21 runners.go:190] proxy-service-sgjpk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0521 15:59:10.156140 21 runners.go:190] proxy-service-sgjpk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0521 15:59:11.156480 21 runners.go:190] proxy-service-sgjpk Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 21 15:59:11.160: INFO: setup took 8.067068002s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
May 21 15:59:11.175: INFO: (0) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 14.755296ms)
May 21 15:59:11.175: INFO: (0) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 15.079965ms)
May 21 15:59:11.175: INFO: (0) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname1/proxy/: foo (200; 15.127638ms)
May 21 15:59:11.175: INFO: (0) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname1/proxy/: foo (200; 14.656553ms)
May 21 15:59:11.175: INFO: (0)
/api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 14.767214ms) May 21 15:59:11.175: INFO: (0) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72/proxy/: test (200; 15.032935ms) May 21 15:59:11.175: INFO: (0) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:1080/proxy/: ... (200; 15.05437ms) May 21 15:59:11.176: INFO: (0) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname2/proxy/: bar (200; 15.904163ms) May 21 15:59:11.177: INFO: (0) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 15.95289ms) May 21 15:59:11.178: INFO: (0) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname2/proxy/: bar (200; 17.403493ms) May 21 15:59:11.178: INFO: (0) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:1080/proxy/: test<... (200; 17.952356ms) May 21 15:59:11.179: INFO: (0) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:460/proxy/: tls baz (200; 18.536676ms) May 21 15:59:11.179: INFO: (0) /api/v1/namespaces/proxy-4504/services/https:proxy-service-sgjpk:tlsportname1/proxy/: tls baz (200; 18.59825ms) May 21 15:59:11.182: INFO: (0) /api/v1/namespaces/proxy-4504/services/https:proxy-service-sgjpk:tlsportname2/proxy/: tls qux (200; 21.914799ms) May 21 15:59:11.182: INFO: (0) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:462/proxy/: tls qux (200; 21.900557ms) May 21 15:59:11.183: INFO: (0) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:443/proxy/: test<... (200; 4.52838ms) May 21 15:59:11.188: INFO: (1) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:1080/proxy/: ... 
(200; 4.649629ms) May 21 15:59:11.188: INFO: (1) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname2/proxy/: bar (200; 4.662771ms) May 21 15:59:11.188: INFO: (1) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname1/proxy/: foo (200; 4.683156ms) May 21 15:59:11.188: INFO: (1) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname1/proxy/: foo (200; 5.004531ms) May 21 15:59:11.188: INFO: (1) /api/v1/namespaces/proxy-4504/services/https:proxy-service-sgjpk:tlsportname2/proxy/: tls qux (200; 4.966097ms) May 21 15:59:11.188: INFO: (1) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname2/proxy/: bar (200; 5.030666ms) May 21 15:59:11.188: INFO: (1) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 5.240426ms) May 21 15:59:11.188: INFO: (1) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72/proxy/: test (200; 5.353413ms) May 21 15:59:11.188: INFO: (1) /api/v1/namespaces/proxy-4504/services/https:proxy-service-sgjpk:tlsportname1/proxy/: tls baz (200; 5.227362ms) May 21 15:59:11.188: INFO: (1) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:443/proxy/: test (200; 3.515764ms) May 21 15:59:11.192: INFO: (2) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname2/proxy/: bar (200; 3.777342ms) May 21 15:59:11.193: INFO: (2) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname2/proxy/: bar (200; 4.027512ms) May 21 15:59:11.193: INFO: (2) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 4.198133ms) May 21 15:59:11.193: INFO: (2) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:443/proxy/: test<... 
(200; 4.588655ms) May 21 15:59:11.193: INFO: (2) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname1/proxy/: foo (200; 4.457422ms) May 21 15:59:11.193: INFO: (2) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 4.680412ms) May 21 15:59:11.193: INFO: (2) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:462/proxy/: tls qux (200; 4.658051ms) May 21 15:59:11.193: INFO: (2) /api/v1/namespaces/proxy-4504/services/https:proxy-service-sgjpk:tlsportname1/proxy/: tls baz (200; 4.613758ms) May 21 15:59:11.193: INFO: (2) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:1080/proxy/: ... (200; 4.677039ms) May 21 15:59:11.197: INFO: (3) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname1/proxy/: foo (200; 3.919478ms) May 21 15:59:11.197: INFO: (3) /api/v1/namespaces/proxy-4504/services/https:proxy-service-sgjpk:tlsportname1/proxy/: tls baz (200; 4.108801ms) May 21 15:59:11.197: INFO: (3) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname2/proxy/: bar (200; 4.044054ms) May 21 15:59:11.197: INFO: (3) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname2/proxy/: bar (200; 4.085839ms) May 21 15:59:11.198: INFO: (3) /api/v1/namespaces/proxy-4504/services/https:proxy-service-sgjpk:tlsportname2/proxy/: tls qux (200; 4.384159ms) May 21 15:59:11.198: INFO: (3) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname1/proxy/: foo (200; 4.388648ms) May 21 15:59:11.198: INFO: (3) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72/proxy/: test (200; 4.345839ms) May 21 15:59:11.198: INFO: (3) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:443/proxy/: test<... (200; 4.79661ms) May 21 15:59:11.198: INFO: (3) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:1080/proxy/: ... 
(200; 4.910829ms) May 21 15:59:11.202: INFO: (4) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 3.538539ms) May 21 15:59:11.202: INFO: (4) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72/proxy/: test (200; 4.011814ms) May 21 15:59:11.203: INFO: (4) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 4.118956ms) May 21 15:59:11.203: INFO: (4) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname1/proxy/: foo (200; 4.27029ms) May 21 15:59:11.203: INFO: (4) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname2/proxy/: bar (200; 4.231694ms) May 21 15:59:11.203: INFO: (4) /api/v1/namespaces/proxy-4504/services/https:proxy-service-sgjpk:tlsportname1/proxy/: tls baz (200; 4.398851ms) May 21 15:59:11.203: INFO: (4) /api/v1/namespaces/proxy-4504/services/https:proxy-service-sgjpk:tlsportname2/proxy/: tls qux (200; 4.734469ms) May 21 15:59:11.203: INFO: (4) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname2/proxy/: bar (200; 4.775879ms) May 21 15:59:11.203: INFO: (4) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 4.929811ms) May 21 15:59:11.203: INFO: (4) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 4.861041ms) May 21 15:59:11.203: INFO: (4) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:462/proxy/: tls qux (200; 4.904845ms) May 21 15:59:11.203: INFO: (4) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:460/proxy/: tls baz (200; 4.915502ms) May 21 15:59:11.203: INFO: (4) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:1080/proxy/: ... (200; 4.902371ms) May 21 15:59:11.203: INFO: (4) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:1080/proxy/: test<... 
(200; 5.032733ms) May 21 15:59:11.203: INFO: (4) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname1/proxy/: foo (200; 4.892658ms) May 21 15:59:11.203: INFO: (4) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:443/proxy/: ... (200; 3.515504ms) May 21 15:59:11.207: INFO: (5) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:460/proxy/: tls baz (200; 3.566267ms) May 21 15:59:11.207: INFO: (5) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 3.735693ms) May 21 15:59:11.207: INFO: (5) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 3.898275ms) May 21 15:59:11.207: INFO: (5) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 3.943885ms) May 21 15:59:11.208: INFO: (5) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:462/proxy/: tls qux (200; 4.061562ms) May 21 15:59:11.208: INFO: (5) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:443/proxy/: test<... 
(200; 5.063881ms) May 21 15:59:11.209: INFO: (5) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname2/proxy/: bar (200; 5.147364ms) May 21 15:59:11.209: INFO: (5) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72/proxy/: test (200; 5.327527ms) May 21 15:59:11.213: INFO: (6) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname2/proxy/: bar (200; 4.392727ms) May 21 15:59:11.213: INFO: (6) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname1/proxy/: foo (200; 4.494871ms) May 21 15:59:11.213: INFO: (6) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72/proxy/: test (200; 4.504164ms) May 21 15:59:11.214: INFO: (6) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname2/proxy/: bar (200; 4.826932ms) May 21 15:59:11.214: INFO: (6) /api/v1/namespaces/proxy-4504/services/https:proxy-service-sgjpk:tlsportname1/proxy/: tls baz (200; 4.92904ms) May 21 15:59:11.214: INFO: (6) /api/v1/namespaces/proxy-4504/services/https:proxy-service-sgjpk:tlsportname2/proxy/: tls qux (200; 4.829066ms) May 21 15:59:11.214: INFO: (6) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 4.841564ms) May 21 15:59:11.214: INFO: (6) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname1/proxy/: foo (200; 4.994444ms) May 21 15:59:11.214: INFO: (6) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:443/proxy/: ... 
(200; 5.032581ms) May 21 15:59:11.214: INFO: (6) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:460/proxy/: tls baz (200; 5.234374ms) May 21 15:59:11.214: INFO: (6) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 5.26151ms) May 21 15:59:11.214: INFO: (6) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 5.234682ms) May 21 15:59:11.214: INFO: (6) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 5.22215ms) May 21 15:59:11.214: INFO: (6) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:1080/proxy/: test<... (200; 5.171178ms) May 21 15:59:11.218: INFO: (7) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 3.839213ms) May 21 15:59:11.218: INFO: (7) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:462/proxy/: tls qux (200; 3.879367ms) May 21 15:59:11.218: INFO: (7) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:1080/proxy/: test<... (200; 3.93458ms) May 21 15:59:11.218: INFO: (7) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:1080/proxy/: ... (200; 3.980135ms) May 21 15:59:11.218: INFO: (7) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:460/proxy/: tls baz (200; 4.010928ms) May 21 15:59:11.218: INFO: (7) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72/proxy/: test (200; 4.073552ms) May 21 15:59:11.218: INFO: (7) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 3.92447ms) May 21 15:59:11.218: INFO: (7) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:443/proxy/: test (200; 3.592947ms) May 21 15:59:11.226: INFO: (8) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 4.082262ms) May 21 15:59:11.227: INFO: (8) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:1080/proxy/: test<... 
(200; 4.368705ms) May 21 15:59:11.227: INFO: (8) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname1/proxy/: foo (200; 4.630616ms) May 21 15:59:11.227: INFO: (8) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:460/proxy/: tls baz (200; 4.978886ms) May 21 15:59:11.227: INFO: (8) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname2/proxy/: bar (200; 4.938063ms) May 21 15:59:11.227: INFO: (8) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname1/proxy/: foo (200; 5.039489ms) May 21 15:59:11.227: INFO: (8) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname2/proxy/: bar (200; 5.166181ms) May 21 15:59:11.227: INFO: (8) /api/v1/namespaces/proxy-4504/services/https:proxy-service-sgjpk:tlsportname2/proxy/: tls qux (200; 5.201968ms) May 21 15:59:11.227: INFO: (8) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:1080/proxy/: ... (200; 5.254363ms) May 21 15:59:11.227: INFO: (8) /api/v1/namespaces/proxy-4504/services/https:proxy-service-sgjpk:tlsportname1/proxy/: tls baz (200; 5.249461ms) May 21 15:59:11.228: INFO: (8) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 5.648174ms) May 21 15:59:11.228: INFO: (8) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 5.777648ms) May 21 15:59:11.228: INFO: (8) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:462/proxy/: tls qux (200; 5.734025ms) May 21 15:59:11.228: INFO: (8) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 6.184771ms) May 21 15:59:11.232: INFO: (9) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72/proxy/: test (200; 3.085009ms) May 21 15:59:11.232: INFO: (9) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:462/proxy/: tls qux (200; 3.257789ms) May 21 15:59:11.232: INFO: (9) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:1080/proxy/: test<... 
(200; 3.326213ms) May 21 15:59:11.232: INFO: (9) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 3.351861ms) May 21 15:59:11.232: INFO: (9) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 3.913278ms) May 21 15:59:11.232: INFO: (9) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 3.959845ms) May 21 15:59:11.233: INFO: (9) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname1/proxy/: foo (200; 4.491637ms) May 21 15:59:11.233: INFO: (9) /api/v1/namespaces/proxy-4504/services/https:proxy-service-sgjpk:tlsportname2/proxy/: tls qux (200; 4.512749ms) May 21 15:59:11.233: INFO: (9) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:1080/proxy/: ... (200; 4.627868ms) May 21 15:59:11.233: INFO: (9) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname2/proxy/: bar (200; 4.57094ms) May 21 15:59:11.233: INFO: (9) /api/v1/namespaces/proxy-4504/services/https:proxy-service-sgjpk:tlsportname1/proxy/: tls baz (200; 4.72636ms) May 21 15:59:11.233: INFO: (9) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname1/proxy/: foo (200; 4.686993ms) May 21 15:59:11.233: INFO: (9) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname2/proxy/: bar (200; 4.881728ms) May 21 15:59:11.234: INFO: (9) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:443/proxy/: ... (200; 4.680181ms) May 21 15:59:11.239: INFO: (10) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 4.83482ms) May 21 15:59:11.239: INFO: (10) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:462/proxy/: tls qux (200; 4.906079ms) May 21 15:59:11.239: INFO: (10) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:1080/proxy/: test<... 
(200; 4.951993ms) May 21 15:59:11.239: INFO: (10) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 5.133203ms) May 21 15:59:11.239: INFO: (10) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 5.017556ms) May 21 15:59:11.239: INFO: (10) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72/proxy/: test (200; 5.072656ms) May 21 15:59:11.246: INFO: (11) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 6.801965ms) May 21 15:59:11.246: INFO: (11) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 6.782087ms) May 21 15:59:11.246: INFO: (11) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:443/proxy/: test<... (200; 6.852901ms) May 21 15:59:11.246: INFO: (11) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 6.884107ms) May 21 15:59:11.248: INFO: (11) /api/v1/namespaces/proxy-4504/services/https:proxy-service-sgjpk:tlsportname1/proxy/: tls baz (200; 8.679296ms) May 21 15:59:11.248: INFO: (11) /api/v1/namespaces/proxy-4504/services/https:proxy-service-sgjpk:tlsportname2/proxy/: tls qux (200; 8.617406ms) May 21 15:59:11.249: INFO: (11) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname2/proxy/: bar (200; 9.592453ms) May 21 15:59:11.249: INFO: (11) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname1/proxy/: foo (200; 9.620467ms) May 21 15:59:11.249: INFO: (11) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname2/proxy/: bar (200; 9.734124ms) May 21 15:59:11.249: INFO: (11) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 9.739173ms) May 21 15:59:11.249: INFO: (11) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname1/proxy/: foo (200; 9.893629ms) May 21 15:59:11.250: INFO: (11) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:460/proxy/: tls 
baz (200; 10.536428ms) May 21 15:59:11.250: INFO: (11) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72/proxy/: test (200; 10.436395ms) May 21 15:59:11.250: INFO: (11) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:1080/proxy/: ... (200; 10.900058ms) May 21 15:59:11.263: INFO: (12) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname2/proxy/: bar (200; 12.036184ms) May 21 15:59:11.263: INFO: (12) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname2/proxy/: bar (200; 11.731687ms) May 21 15:59:11.263: INFO: (12) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname1/proxy/: foo (200; 12.681815ms) May 21 15:59:11.263: INFO: (12) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:462/proxy/: tls qux (200; 11.259061ms) May 21 15:59:11.263: INFO: (12) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname1/proxy/: foo (200; 12.230505ms) May 21 15:59:11.263: INFO: (12) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:1080/proxy/: ... (200; 10.876738ms) May 21 15:59:11.263: INFO: (12) /api/v1/namespaces/proxy-4504/services/https:proxy-service-sgjpk:tlsportname2/proxy/: tls qux (200; 12.047801ms) May 21 15:59:11.263: INFO: (12) /api/v1/namespaces/proxy-4504/services/https:proxy-service-sgjpk:tlsportname1/proxy/: tls baz (200; 12.634ms) May 21 15:59:11.263: INFO: (12) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:443/proxy/: test<... 
(200; 11.445418ms) May 21 15:59:11.263: INFO: (12) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:460/proxy/: tls baz (200; 11.904177ms) May 21 15:59:11.263: INFO: (12) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 12.025169ms) May 21 15:59:11.264: INFO: (12) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 12.833281ms) May 21 15:59:11.264: INFO: (12) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72/proxy/: test (200; 12.222676ms) May 21 15:59:11.264: INFO: (12) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 13.402905ms) May 21 15:59:11.267: INFO: (13) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:443/proxy/: test<... (200; 3.371609ms) May 21 15:59:11.267: INFO: (13) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname2/proxy/: bar (200; 3.429846ms) May 21 15:59:11.267: INFO: (13) /api/v1/namespaces/proxy-4504/services/https:proxy-service-sgjpk:tlsportname2/proxy/: tls qux (200; 3.480218ms) May 21 15:59:11.267: INFO: (13) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:1080/proxy/: ... 
(200; 3.392606ms) May 21 15:59:11.267: INFO: (13) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 3.496831ms) May 21 15:59:11.267: INFO: (13) /api/v1/namespaces/proxy-4504/services/https:proxy-service-sgjpk:tlsportname1/proxy/: tls baz (200; 3.618519ms) May 21 15:59:11.267: INFO: (13) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 3.59865ms) May 21 15:59:11.267: INFO: (13) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 3.668588ms) May 21 15:59:11.267: INFO: (13) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 3.609765ms) May 21 15:59:11.267: INFO: (13) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:460/proxy/: tls baz (200; 3.668802ms) May 21 15:59:11.267: INFO: (13) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72/proxy/: test (200; 3.695202ms) May 21 15:59:11.268: INFO: (13) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:462/proxy/: tls qux (200; 3.964534ms) May 21 15:59:11.270: INFO: (14) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname2/proxy/: bar (200; 2.784766ms) May 21 15:59:11.271: INFO: (14) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72/proxy/: test (200; 2.988065ms) May 21 15:59:11.271: INFO: (14) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:460/proxy/: tls baz (200; 3.060099ms) May 21 15:59:11.271: INFO: (14) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname2/proxy/: bar (200; 3.021131ms) May 21 15:59:11.271: INFO: (14) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 3.203585ms) May 21 15:59:11.271: INFO: (14) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:443/proxy/: ... 
(200; 3.680945ms) May 21 15:59:11.271: INFO: (14) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 3.653998ms) May 21 15:59:11.271: INFO: (14) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 3.56933ms) May 21 15:59:11.271: INFO: (14) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:1080/proxy/: test<... (200; 3.673931ms) May 21 15:59:11.274: INFO: (15) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:460/proxy/: tls baz (200; 2.893918ms) May 21 15:59:11.275: INFO: (15) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname1/proxy/: foo (200; 3.272639ms) May 21 15:59:11.275: INFO: (15) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 3.31433ms) May 21 15:59:11.275: INFO: (15) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname1/proxy/: foo (200; 3.345561ms) May 21 15:59:11.275: INFO: (15) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname2/proxy/: bar (200; 3.460392ms) May 21 15:59:11.275: INFO: (15) /api/v1/namespaces/proxy-4504/services/https:proxy-service-sgjpk:tlsportname2/proxy/: tls qux (200; 3.492134ms) May 21 15:59:11.275: INFO: (15) /api/v1/namespaces/proxy-4504/services/https:proxy-service-sgjpk:tlsportname1/proxy/: tls baz (200; 3.539202ms) May 21 15:59:11.275: INFO: (15) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 3.599145ms) May 21 15:59:11.275: INFO: (15) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:443/proxy/: test<... (200; 3.680473ms) May 21 15:59:11.275: INFO: (15) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:1080/proxy/: ... 
(200; 3.755034ms) May 21 15:59:11.275: INFO: (15) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname2/proxy/: bar (200; 3.709232ms) May 21 15:59:11.275: INFO: (15) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 3.8034ms) May 21 15:59:11.275: INFO: (15) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72/proxy/: test (200; 3.753609ms) May 21 15:59:11.276: INFO: (15) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:462/proxy/: tls qux (200; 4.043781ms) May 21 15:59:11.278: INFO: (16) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 2.770109ms) May 21 15:59:11.279: INFO: (16) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 2.896901ms) May 21 15:59:11.279: INFO: (16) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:1080/proxy/: ... (200; 2.983833ms) May 21 15:59:11.279: INFO: (16) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname2/proxy/: bar (200; 3.165506ms) May 21 15:59:11.279: INFO: (16) /api/v1/namespaces/proxy-4504/services/https:proxy-service-sgjpk:tlsportname2/proxy/: tls qux (200; 3.573175ms) May 21 15:59:11.280: INFO: (16) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 3.806445ms) May 21 15:59:11.280: INFO: (16) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname1/proxy/: foo (200; 3.996624ms) May 21 15:59:11.280: INFO: (16) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname1/proxy/: foo (200; 4.061984ms) May 21 15:59:11.280: INFO: (16) /api/v1/namespaces/proxy-4504/services/https:proxy-service-sgjpk:tlsportname1/proxy/: tls baz (200; 4.059525ms) May 21 15:59:11.280: INFO: (16) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:1080/proxy/: test<... 
(200; 4.215787ms) May 21 15:59:11.280: INFO: (16) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72/proxy/: test (200; 4.090719ms) May 21 15:59:11.280: INFO: (16) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname2/proxy/: bar (200; 4.132655ms) May 21 15:59:11.280: INFO: (16) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:443/proxy/: test (200; 3.743086ms) May 21 15:59:11.284: INFO: (17) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname2/proxy/: bar (200; 3.816171ms) May 21 15:59:11.284: INFO: (17) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname1/proxy/: foo (200; 3.896708ms) May 21 15:59:11.284: INFO: (17) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:443/proxy/: test<... (200; 4.138185ms) May 21 15:59:11.284: INFO: (17) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 4.138013ms) May 21 15:59:11.284: INFO: (17) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:462/proxy/: tls qux (200; 4.147397ms) May 21 15:59:11.284: INFO: (17) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:1080/proxy/: ... 
(200; 4.203586ms) May 21 15:59:11.284: INFO: (17) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:460/proxy/: tls baz (200; 4.182255ms) May 21 15:59:11.284: INFO: (17) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 4.253929ms) May 21 15:59:11.284: INFO: (17) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 4.381285ms) May 21 15:59:11.284: INFO: (17) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 4.374987ms) May 21 15:59:11.288: INFO: (18) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 3.384254ms) May 21 15:59:11.288: INFO: (18) /api/v1/namespaces/proxy-4504/services/https:proxy-service-sgjpk:tlsportname2/proxy/: tls qux (200; 3.408335ms) May 21 15:59:11.288: INFO: (18) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:460/proxy/: tls baz (200; 3.436814ms) May 21 15:59:11.288: INFO: (18) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname2/proxy/: bar (200; 3.946607ms) May 21 15:59:11.289: INFO: (18) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname2/proxy/: bar (200; 3.986294ms) May 21 15:59:11.289: INFO: (18) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname1/proxy/: foo (200; 4.085795ms) May 21 15:59:11.289: INFO: (18) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname1/proxy/: foo (200; 4.001831ms) May 21 15:59:11.289: INFO: (18) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:1080/proxy/: test<... 
(200; 4.293072ms) May 21 15:59:11.289: INFO: (18) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 4.204329ms) May 21 15:59:11.289: INFO: (18) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 4.291087ms) May 21 15:59:11.289: INFO: (18) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72/proxy/: test (200; 4.272963ms) May 21 15:59:11.289: INFO: (18) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 4.406263ms) May 21 15:59:11.289: INFO: (18) /api/v1/namespaces/proxy-4504/services/https:proxy-service-sgjpk:tlsportname1/proxy/: tls baz (200; 4.386994ms) May 21 15:59:11.289: INFO: (18) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:462/proxy/: tls qux (200; 4.408417ms) May 21 15:59:11.289: INFO: (18) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:1080/proxy/: ... (200; 4.626304ms) May 21 15:59:11.289: INFO: (18) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:443/proxy/: test<... (200; 10.161667ms) May 21 15:59:11.299: INFO: (19) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 10.296842ms) May 21 15:59:11.299: INFO: (19) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:162/proxy/: bar (200; 10.149988ms) May 21 15:59:11.299: INFO: (19) /api/v1/namespaces/proxy-4504/pods/https:proxy-service-sgjpk-ttd72:460/proxy/: tls baz (200; 10.206449ms) May 21 15:59:11.303: INFO: (19) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname1/proxy/: foo (200; 14.240174ms) May 21 15:59:11.304: INFO: (19) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:1080/proxy/: ... 
(200; 14.464879ms) May 21 15:59:11.312: INFO: (19) /api/v1/namespaces/proxy-4504/services/http:proxy-service-sgjpk:portname1/proxy/: foo (200; 22.415587ms) May 21 15:59:11.312: INFO: (19) /api/v1/namespaces/proxy-4504/pods/proxy-service-sgjpk-ttd72/proxy/: test (200; 22.41665ms) May 21 15:59:11.312: INFO: (19) /api/v1/namespaces/proxy-4504/services/proxy-service-sgjpk:portname2/proxy/: bar (200; 22.4722ms) May 21 15:59:11.312: INFO: (19) /api/v1/namespaces/proxy-4504/pods/http:proxy-service-sgjpk-ttd72:160/proxy/: foo (200; 22.619336ms) STEP: deleting ReplicationController proxy-service-sgjpk in namespace proxy-4504, will wait for the garbage collector to delete the pods May 21 15:59:11.375: INFO: Deleting ReplicationController proxy-service-sgjpk took: 6.606244ms May 21 15:59:11.475: INFO: Terminating ReplicationController proxy-service-sgjpk pods took: 100.223471ms [AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 15:59:15.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4504" for this suite. 
• [SLOW TEST:12.617 seconds] [sig-network] Proxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":-1,"completed":14,"skipped":198,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 15:59:15.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 15:59:15.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-9785" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":12,"skipped":190,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 15:59:09.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 15:59:09.931: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 15:59:16.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1202" for this suite. 
• [SLOW TEST:6.244 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":3,"skipped":79,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 15:57:55.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-c9ba08b7-1e76-444e-96af-402d985b26a9 STEP: Creating secret with name s-test-opt-upd-b4789789-f8d4-466c-a5d1-75fd7ffb0c95 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-c9ba08b7-1e76-444e-96af-402d985b26a9 STEP: Updating secret s-test-opt-upd-b4789789-f8d4-466c-a5d1-75fd7ffb0c95 STEP: Creating secret with name s-test-opt-create-981f6452-bd4c-4227-ade3-ebadc5fb9f15 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 15:59:18.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2599" for this suite. • [SLOW TEST:82.401 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 15:59:15.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 15:59:19.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7017" for this suite. 
• ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":203,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 15:59:15.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 21 15:59:15.831: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0c7c5f53-b874-4b2c-ba62-930146d85fc9" in namespace "downward-api-6556" to be "Succeeded or Failed" May 21 15:59:15.833: INFO: Pod "downwardapi-volume-0c7c5f53-b874-4b2c-ba62-930146d85fc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152993ms May 21 15:59:17.837: INFO: Pod "downwardapi-volume-0c7c5f53-b874-4b2c-ba62-930146d85fc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005509119s May 21 15:59:19.840: INFO: Pod "downwardapi-volume-0c7c5f53-b874-4b2c-ba62-930146d85fc9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009164895s May 21 15:59:21.845: INFO: Pod "downwardapi-volume-0c7c5f53-b874-4b2c-ba62-930146d85fc9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.013875432s STEP: Saw pod success May 21 15:59:21.845: INFO: Pod "downwardapi-volume-0c7c5f53-b874-4b2c-ba62-930146d85fc9" satisfied condition "Succeeded or Failed" May 21 15:59:21.849: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-0c7c5f53-b874-4b2c-ba62-930146d85fc9 container client-container: STEP: delete the pod May 21 15:59:21.862: INFO: Waiting for pod downwardapi-volume-0c7c5f53-b874-4b2c-ba62-930146d85fc9 to disappear May 21 15:59:21.864: INFO: Pod downwardapi-volume-0c7c5f53-b874-4b2c-ba62-930146d85fc9 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 15:59:21.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6556" for this suite. • [SLOW TEST:6.066 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":238,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 15:59:16.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container May 21 15:59:22.766: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9376 pod-service-account-fcad64f0-1e66-4dce-8466-046625e5d16d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 21 15:59:22.982: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9376 pod-service-account-fcad64f0-1e66-4dce-8466-046625e5d16d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 21 15:59:23.244: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9376 pod-service-account-fcad64f0-1e66-4dce-8466-046625e5d16d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 15:59:23.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9376" for this suite. 
• [SLOW TEST:7.284 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":4,"skipped":119,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 15:59:21.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 15:59:21.918: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 15:59:23.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2183" for this suite. 
• ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":245,"failed":0} SSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":59,"failed":0} [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 15:59:18.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 21 15:59:22.897: INFO: Successfully updated pod "labelsupdate03c93161-c80b-4ddf-b64f-e791752f8736" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 15:59:24.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1984" for this suite. 
• [SLOW TEST:6.581 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":59,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 15:59:23.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 21 15:59:23.579: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1fbb9040-31c4-4db3-9e7f-b0173ae10426" in namespace "projected-7611" to be "Succeeded or Failed" May 21 15:59:23.581: INFO: Pod "downwardapi-volume-1fbb9040-31c4-4db3-9e7f-b0173ae10426": Phase="Pending", Reason="", readiness=false. Elapsed: 2.569283ms May 21 15:59:25.584: INFO: Pod "downwardapi-volume-1fbb9040-31c4-4db3-9e7f-b0173ae10426": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.005855446s STEP: Saw pod success May 21 15:59:25.585: INFO: Pod "downwardapi-volume-1fbb9040-31c4-4db3-9e7f-b0173ae10426" satisfied condition "Succeeded or Failed" May 21 15:59:25.587: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-1fbb9040-31c4-4db3-9e7f-b0173ae10426 container client-container: STEP: delete the pod May 21 15:59:25.601: INFO: Waiting for pod downwardapi-volume-1fbb9040-31c4-4db3-9e7f-b0173ae10426 to disappear May 21 15:59:25.603: INFO: Pod downwardapi-volume-1fbb9040-31c4-4db3-9e7f-b0173ae10426 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 15:59:25.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7611" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":141,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 15:59:19.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 21 15:59:24.814: INFO: Pod name pod-adoption-release: Found 1 pods 
out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 15:59:25.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7757" for this suite. • [SLOW TEST:6.080 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":16,"skipped":206,"failed":0} SS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 15:59:25.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 15:59:25.654: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-4738 create -f -' May 21 15:59:25.988: INFO: stderr: "" May 21 15:59:25.988: INFO: stdout: "replicationcontroller/agnhost-primary created\n" May 21 15:59:25.988: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-4738 create -f -' May 21 15:59:26.263: INFO: stderr: "" May 21 15:59:26.263: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. May 21 15:59:27.267: INFO: Selector matched 1 pods for map[app:agnhost] May 21 15:59:27.267: INFO: Found 0 / 1 May 21 15:59:28.266: INFO: Selector matched 1 pods for map[app:agnhost] May 21 15:59:28.267: INFO: Found 1 / 1 May 21 15:59:28.267: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 21 15:59:28.269: INFO: Selector matched 1 pods for map[app:agnhost] May 21 15:59:28.269: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 21 15:59:28.269: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-4738 describe pod agnhost-primary-5pxll' May 21 15:59:28.413: INFO: stderr: "" May 21 15:59:28.413: INFO: stdout: "Name: agnhost-primary-5pxll\nNamespace: kubectl-4738\nPriority: 0\nNode: kali-worker/172.18.0.2\nStart Time: Fri, 21 May 2021 15:59:25 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.1.77\"\n ],\n \"mac\": \"ae:06:8b:04:f2:e3\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.1.77\"\n ],\n \"mac\": \"ae:06:8b:04:f2:e3\",\n \"default\": true,\n \"dns\": {}\n }]\nStatus: Running\nIP: 10.244.1.77\nIPs:\n IP: 10.244.1.77\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://d5c5a6a530781cc9d15a3ae60e0a85deaf7925b18397d0938ec570fa0da28bc0\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Image ID: 
k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 21 May 2021 15:59:26 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-4w54r (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-4w54r:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-4w54r\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-4738/agnhost-primary-5pxll to kali-worker\n Normal AddedInterface 2s multus Add eth0 [10.244.1.77/24]\n Normal Pulled 2s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.20\" already present on machine\n Normal Created 2s kubelet Created container agnhost-primary\n Normal Started 2s kubelet Started container agnhost-primary\n" May 21 15:59:28.414: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-4738 describe rc agnhost-primary' May 21 15:59:28.564: INFO: stderr: "" May 21 15:59:28.564: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-4738\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- 
-------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-primary-5pxll\n" May 21 15:59:28.564: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-4738 describe service agnhost-primary' May 21 15:59:28.686: INFO: stderr: "" May 21 15:59:28.686: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-4738\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP: 10.96.103.192\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.77:6379\nSession Affinity: None\nEvents: \n" May 21 15:59:28.690: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-4738 describe node kali-control-plane' May 21 15:59:28.868: INFO: stderr: "" May 21 15:59:28.868: INFO: stdout: "Name: kali-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n ingress-ready=true\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=kali-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 21 May 2021 15:13:14 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: kali-control-plane\n AcquireTime: \n RenewTime: Fri, 21 May 2021 15:59:20 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 21 May 2021 15:57:00 +0000 Fri, 21 May 2021 15:13:08 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 21 May 2021 15:57:00 +0000 Fri, 21 May 2021 15:13:08 +0000 KubeletHasNoDiskPressure kubelet 
has no disk pressure\n PIDPressure False Fri, 21 May 2021 15:57:00 +0000 Fri, 21 May 2021 15:13:08 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 21 May 2021 15:57:00 +0000 Fri, 21 May 2021 15:13:49 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.3\n Hostname: kali-control-plane\nCapacity:\n cpu: 88\n ephemeral-storage: 459602040Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65849824Ki\n pods: 110\nAllocatable:\n cpu: 88\n ephemeral-storage: 459602040Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65849824Ki\n pods: 110\nSystem Info:\n Machine ID: 34385849da584383988461f411f72b36\n System UUID: 8eda95c3-4d8b-4712-a84e-b5a53507e203\n Boot ID: 8e840902-9ac1-4acc-b00a-3731226c7bea\n Kernel Version: 5.4.0-73-generic\n OS Image: Ubuntu 20.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.5.1\n Kubelet Version: v1.19.11\n Kube-Proxy Version: v1.19.11\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nProviderID: kind://docker/kali/kali-control-plane\nNon-terminated Pods: (14 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-f9fd979d6-mpnsm 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 45m\n kube-system coredns-f9fd979d6-nfqfd 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 45m\n kube-system create-loop-devs-cwbn4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43m\n kube-system etcd-kali-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 45m\n kube-system kindnet-7b2zs 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 45m\n kube-system kube-apiserver-kali-control-plane 250m (0%) 0 (0%) 0 (0%) 0 (0%) 45m\n kube-system kube-controller-manager-kali-control-plane 200m (0%) 0 (0%) 0 (0%) 0 (0%) 45m\n kube-system kube-multus-ds-xtw9p 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 43m\n kube-system kube-proxy-c6n8g 0 (0%) 0 (0%) 0 (0%) 0 (0%) 45m\n kube-system 
kube-scheduler-kali-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 45m\n kube-system tune-sysctls-zzq45 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43m\n local-path-storage local-path-provisioner-547f784dff-s88mx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 45m\n metallb-system speaker-jlmfn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43m\n projectcontour envoy-788lx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 950m (1%) 200m (0%)\n memory 240Mi (0%) 440Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Warning SystemOOM 46m (x7 over 46m) kubelet (combined from similar events): System OOM encountered, victim process: iptables, pid: 3384209\n Warning SystemOOM 45m kubelet System OOM encountered, victim process: iptables, pid: 3015608\n Warning SystemOOM 45m kubelet System OOM encountered, victim process: iptables, pid: 2986947\n Warning SystemOOM 45m kubelet System OOM encountered, victim process: iptables, pid: 2990225\n Warning SystemOOM 45m kubelet System OOM encountered, victim process: iptables, pid: 3006037\n Warning SystemOOM 45m kubelet System OOM encountered, victim process: iptables, pid: 3013270\n Normal Starting 45m kubelet Starting kubelet.\n Warning SystemOOM 45m kubelet System OOM encountered, victim process: iptables, pid: 3021517\n Warning SystemOOM 45m kubelet System OOM encountered, victim process: kindnetd, pid: 2560255\n Warning SystemOOM 45m kubelet System OOM encountered, victim process: iptables, pid: 3195935\n Warning SystemOOM 45m kubelet System OOM encountered, victim process: kindnetd, pid: 2553185\n Warning SystemOOM 45m (x15 over 45m) kubelet (combined from similar events): System OOM encountered, victim process: kindnetd, pid: 3300553\n Normal Starting 45m kube-proxy Starting kube-proxy.\n"
May 21 15:59:28.868: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-4738 describe namespace kubectl-4738'
May 21 15:59:28.997: INFO: stderr: ""
May 21 15:59:28.997: INFO: stdout: "Name: kubectl-4738\nLabels: e2e-framework=kubectl\n e2e-run=4e8893a7-b5ea-4285-99ba-9cc2cf9bca52\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:59:28.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4738" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":6,"skipped":152,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:29.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should check if v1 is in available api versions [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: validating api versions
May 21 15:59:29.111: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8426 api-versions'
May 21 15:59:29.231: INFO: stderr: ""
May 21 15:59:29.231: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncrd-publish-openapi-test-unknown-in-nested.example.com/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nk8s.cni.cncf.io/v1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nprojectcontour.io/v1\nprojectcontour.io/v1alpha1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:59:29.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8426" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":7,"skipped":197,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:29.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-eaeb92ec-4ef0-4f04-a84a-cb711a58fa18
STEP: Creating a pod to test consume configMaps
May 21 15:59:29.328: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c7688881-ac2b-43fd-865d-36f23e18667c" in namespace "projected-9382" to be "Succeeded or Failed"
May 21 15:59:29.330: INFO: Pod "pod-projected-configmaps-c7688881-ac2b-43fd-865d-36f23e18667c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.649488ms
May 21 15:59:31.334: INFO: Pod "pod-projected-configmaps-c7688881-ac2b-43fd-865d-36f23e18667c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006052896s
May 21 15:59:33.339: INFO: Pod "pod-projected-configmaps-c7688881-ac2b-43fd-865d-36f23e18667c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010707218s
STEP: Saw pod success
May 21 15:59:33.339: INFO: Pod "pod-projected-configmaps-c7688881-ac2b-43fd-865d-36f23e18667c" satisfied condition "Succeeded or Failed"
May 21 15:59:33.342: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-c7688881-ac2b-43fd-865d-36f23e18667c container projected-configmap-volume-test:
STEP: delete the pod
May 21 15:59:33.357: INFO: Waiting for pod pod-projected-configmaps-c7688881-ac2b-43fd-865d-36f23e18667c to disappear
May 21 15:59:33.359: INFO: Pod pod-projected-configmaps-c7688881-ac2b-43fd-865d-36f23e18667c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:59:33.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9382" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":233,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:23.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 15:59:24.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 21 15:59:28.496: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4871 --namespace=crd-publish-openapi-4871 create -f -'
May 21 15:59:28.927: INFO: stderr: ""
May 21 15:59:28.927: INFO: stdout: "e2e-test-crd-publish-openapi-8667-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
May 21 15:59:28.927: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4871 --namespace=crd-publish-openapi-4871 delete e2e-test-crd-publish-openapi-8667-crds test-cr'
May 21 15:59:29.051: INFO: stderr: ""
May 21 15:59:29.051: INFO: stdout: "e2e-test-crd-publish-openapi-8667-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
May 21 15:59:29.051: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4871 --namespace=crd-publish-openapi-4871 apply -f -'
May 21 15:59:29.299: INFO: stderr: ""
May 21 15:59:29.300: INFO: stdout: "e2e-test-crd-publish-openapi-8667-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
May 21 15:59:29.300: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4871 --namespace=crd-publish-openapi-4871 delete e2e-test-crd-publish-openapi-8667-crds test-cr'
May 21 15:59:29.431: INFO: stderr: ""
May 21 15:59:29.431: INFO: stdout: "e2e-test-crd-publish-openapi-8667-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
May 21 15:59:29.431: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4871 explain e2e-test-crd-publish-openapi-8667-crds'
May 21 15:59:29.699: INFO: stderr: ""
May 21 15:59:29.699: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8667-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:59:33.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4871" for this suite.
• [SLOW TEST:9.652 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":15,"skipped":255,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:58:31.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0521 15:58:32.548115 31 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 21 15:59:34.565: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:59:34.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4745" for this suite.
• [SLOW TEST:63.093 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":4,"skipped":29,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:34.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name secret-emptykey-test-75b08764-88fc-42b7-8a1b-a40d03b22319
[AfterEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:59:34.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2972" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":5,"skipped":36,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:34.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:59:34.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-4899" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":6,"skipped":47,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:33.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:59:35.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6599" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":273,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:35.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-c3fae713-ac31-4114-8634-2c6e39f80ab2
STEP: Creating a pod to test consume secrets
May 21 15:59:35.652: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b68e3c14-61ad-4724-9cd6-3ae0e4c1ced8" in namespace "projected-8517" to be "Succeeded or Failed"
May 21 15:59:35.655: INFO: Pod "pod-projected-secrets-b68e3c14-61ad-4724-9cd6-3ae0e4c1ced8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218198ms
May 21 15:59:37.658: INFO: Pod "pod-projected-secrets-b68e3c14-61ad-4724-9cd6-3ae0e4c1ced8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005704278s
STEP: Saw pod success
May 21 15:59:37.658: INFO: Pod "pod-projected-secrets-b68e3c14-61ad-4724-9cd6-3ae0e4c1ced8" satisfied condition "Succeeded or Failed"
May 21 15:59:37.661: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-b68e3c14-61ad-4724-9cd6-3ae0e4c1ced8 container projected-secret-volume-test:
STEP: delete the pod
May 21 15:59:37.675: INFO: Waiting for pod pod-projected-secrets-b68e3c14-61ad-4724-9cd6-3ae0e4c1ced8 to disappear
May 21 15:59:37.678: INFO: Pod pod-projected-secrets-b68e3c14-61ad-4724-9cd6-3ae0e4c1ced8 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:59:37.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8517" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":358,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":104,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:11.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-1621
STEP: creating service affinity-clusterip in namespace services-1621
STEP: creating replication controller affinity-clusterip in namespace services-1621
I0521 15:59:11.519840 18 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-1621, replica count: 3
I0521 15:59:14.570286 18 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0521 15:59:17.570628 18 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 21 15:59:17.575: INFO: Creating new exec pod
May 21 15:59:24.588: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-1621 exec execpod-affinityx659s -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
May 21 15:59:24.796: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n"
May 21 15:59:24.796: INFO: stdout: ""
May 21 15:59:24.798: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-1621 exec execpod-affinityx659s -- /bin/sh -x -c nc -zv -t -w 2 10.96.121.211 80'
May 21 15:59:25.021: INFO: stderr: "+ nc -zv -t -w 2 10.96.121.211 80\nConnection to 10.96.121.211 80 port [tcp/http] succeeded!\n"
May 21 15:59:25.021: INFO: stdout: ""
May 21 15:59:25.021: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-1621 exec execpod-affinityx659s -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.121.211:80/ ; done'
May 21 15:59:25.331: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.211:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.211:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.211:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.211:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.211:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.211:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.211:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.211:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.211:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.211:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.211:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.211:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.211:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.211:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.211:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.211:80/\n"
May 21 15:59:25.332: INFO: stdout: "\naffinity-clusterip-57tq2\naffinity-clusterip-57tq2\naffinity-clusterip-57tq2\naffinity-clusterip-57tq2\naffinity-clusterip-57tq2\naffinity-clusterip-57tq2\naffinity-clusterip-57tq2\naffinity-clusterip-57tq2\naffinity-clusterip-57tq2\naffinity-clusterip-57tq2\naffinity-clusterip-57tq2\naffinity-clusterip-57tq2\naffinity-clusterip-57tq2\naffinity-clusterip-57tq2\naffinity-clusterip-57tq2\naffinity-clusterip-57tq2"
May 21 15:59:25.332: INFO: Received response from host: affinity-clusterip-57tq2
May 21 15:59:25.332: INFO: Received response from host: affinity-clusterip-57tq2
May 21 15:59:25.332: INFO: Received response from host: affinity-clusterip-57tq2
May 21 15:59:25.332: INFO: Received response from host: affinity-clusterip-57tq2
May 21 15:59:25.332: INFO: Received response from host: affinity-clusterip-57tq2
May 21 15:59:25.332: INFO: Received response from host: affinity-clusterip-57tq2
May 21 15:59:25.332: INFO: Received response from host: affinity-clusterip-57tq2
May 21 15:59:25.332: INFO: Received response from host: affinity-clusterip-57tq2
May 21 15:59:25.332: INFO: Received response from host: affinity-clusterip-57tq2
May 21 15:59:25.332: INFO: Received response from host: affinity-clusterip-57tq2
May 21 15:59:25.332: INFO: Received response from host: affinity-clusterip-57tq2
May 21 15:59:25.332: INFO: Received response from host: affinity-clusterip-57tq2
May 21 15:59:25.332: INFO: Received response from host: affinity-clusterip-57tq2
May 21 15:59:25.332: INFO: Received response from host: affinity-clusterip-57tq2
May 21 15:59:25.332: INFO: Received response from host: affinity-clusterip-57tq2
May 21 15:59:25.332: INFO: Received response from host: affinity-clusterip-57tq2
May 21 15:59:25.332: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip in namespace services-1621, will wait for the garbage collector to delete the pods
May 21 15:59:25.397: INFO: Deleting ReplicationController affinity-clusterip took: 4.724294ms
May 21 15:59:25.497: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.31057ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:59:40.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1621" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:29.042 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":10,"skipped":104,"failed":0}
S
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:57:53.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-2849
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a new StatefulSet
May 21 15:57:53.783: INFO: Found 0 stateful pods, waiting for 3
May 21 15:58:03.788: INFO: Found 2 stateful pods, waiting for 3
May 21 15:58:13.788: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 21 15:58:13.788: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 21 15:58:13.788: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
May 21 15:58:13.814: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
May 21 15:58:23.848: INFO: Updating stateful set ss2
May 21 15:58:23.855: INFO: Waiting for Pod statefulset-2849/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
May 21 15:58:33.881: INFO: Found 1 stateful pods, waiting for 3
May 21 15:58:43.885: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 21 15:58:43.885: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 21 15:58:43.885: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
May 21 15:58:53.885: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 21 15:58:53.885: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 21 15:58:53.885: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
May 21 15:58:53.910: INFO: Updating stateful set ss2
May 21 15:58:53.916: INFO: Waiting for Pod statefulset-2849/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May 21 15:59:03.941: INFO: Updating stateful set ss2
May 21 15:59:03.947: INFO: Waiting for StatefulSet statefulset-2849/ss2 to complete update
May 21 15:59:03.947: INFO: Waiting for Pod statefulset-2849/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May 21 15:59:13.953: INFO: Waiting for StatefulSet statefulset-2849/ss2 to complete update
May 21 15:59:13.954: INFO: Waiting for Pod statefulset-2849/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
May 21 15:59:23.954: INFO: Deleting all statefulset in ns statefulset-2849
May 21 15:59:23.958: INFO: Scaling statefulset ss2 to 0
May 21 15:59:43.975: INFO: Waiting for statefulset status.replicas updated to 0
May 21 15:59:43.977: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:59:43.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2849" for this suite.
• [SLOW TEST:110.255 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":2,"skipped":48,"failed":0}
S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:37.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should create and stop a working application [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating all guestbook components
May 21 15:59:37.763: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend

May 21 15:59:37.763: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8427 create -f -'
May 21 15:59:38.128: INFO: stderr: ""
May 21 15:59:38.128: INFO: stdout: "service/agnhost-replica created\n"
May 21 15:59:38.128: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend

May 21 15:59:38.128: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8427 create -f -'
May 21 15:59:38.390: INFO: stderr: ""
May 21 15:59:38.390: INFO: stdout: "service/agnhost-primary created\n"
May 21 15:59:38.390: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

May 21 15:59:38.390: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8427 create -f -'
May 21 15:59:38.658: INFO: stderr: ""
May 21 15:59:38.658: INFO: stdout: "service/frontend created\n"
May 21 15:59:38.658: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

May 21 15:59:38.658: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8427 create -f -'
May 21 15:59:38.911: INFO: stderr: ""
May 21 15:59:38.911: INFO: stdout: "deployment.apps/frontend created\n"
May 21 15:59:38.911: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May 21 15:59:38.911: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8427 create -f -'
May 21 15:59:39.168: INFO: stderr: ""
May 21 15:59:39.168: INFO: stdout: "deployment.apps/agnhost-primary created\n"
May 21 15:59:39.169: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May 21 15:59:39.169: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8427 create -f -'
May 21 15:59:39.427: INFO: stderr: ""
May 21 15:59:39.427: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
May 21 15:59:39.427: INFO: Waiting for all frontend pods to be Running.
May 21 15:59:44.478: INFO: Waiting for frontend to serve content.
May 21 15:59:44.488: INFO: Trying to add a new entry to the guestbook.
May 21 15:59:44.500: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
May 21 15:59:44.509: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8427 delete --grace-period=0 --force -f -'
May 21 15:59:44.637: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 21 15:59:44.637: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
May 21 15:59:44.637: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8427 delete --grace-period=0 --force -f -'
May 21 15:59:44.758: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 21 15:59:44.758: INFO: stdout: "service \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
May 21 15:59:44.758: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8427 delete --grace-period=0 --force -f -'
May 21 15:59:44.880: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 21 15:59:44.881: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May 21 15:59:44.881: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8427 delete --grace-period=0 --force -f -'
May 21 15:59:45.002: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 21 15:59:45.002: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May 21 15:59:45.002: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8427 delete --grace-period=0 --force -f -'
May 21 15:59:45.128: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 21 15:59:45.128: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
May 21 15:59:45.128: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8427 delete --grace-period=0 --force -f -'
May 21 15:59:45.256: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 21 15:59:45.257: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:59:45.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8427" for this suite.
• [SLOW TEST:7.529 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342
    should create and stop a working application [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":11,"skipped":386,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:44.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 21 15:59:44.035: INFO: Waiting up to 5m0s for pod "pod-2023efce-59b2-462b-bd21-6dd9b3bb5879" in namespace "emptydir-1225" to be "Succeeded or Failed"
May 21 15:59:44.037: INFO: Pod "pod-2023efce-59b2-462b-bd21-6dd9b3bb5879": Phase="Pending", Reason="", readiness=false. Elapsed: 2.24458ms
May 21 15:59:46.041: INFO: Pod "pod-2023efce-59b2-462b-bd21-6dd9b3bb5879": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005958838s
STEP: Saw pod success
May 21 15:59:46.041: INFO: Pod "pod-2023efce-59b2-462b-bd21-6dd9b3bb5879" satisfied condition "Succeeded or Failed"
May 21 15:59:46.044: INFO: Trying to get logs from node kali-worker pod pod-2023efce-59b2-462b-bd21-6dd9b3bb5879 container test-container:
STEP: delete the pod
May 21 15:59:46.058: INFO: Waiting for pod pod-2023efce-59b2-462b-bd21-6dd9b3bb5879 to disappear
May 21 15:59:46.060: INFO: Pod pod-2023efce-59b2-462b-bd21-6dd9b3bb5879 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:59:46.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1225" for this suite.
•
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:24.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-projected-cjnl
STEP: Creating a pod to test atomic-volume-subpath
May 21 15:59:24.992: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-cjnl" in namespace "subpath-6683" to be "Succeeded or Failed"
May 21 15:59:24.995: INFO: Pod "pod-subpath-test-projected-cjnl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.445525ms
May 21 15:59:26.998: INFO: Pod "pod-subpath-test-projected-cjnl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005919357s
May 21 15:59:29.001: INFO: Pod "pod-subpath-test-projected-cjnl": Phase="Running", Reason="", readiness=true. Elapsed: 4.008801269s
May 21 15:59:31.004: INFO: Pod "pod-subpath-test-projected-cjnl": Phase="Running", Reason="", readiness=true. Elapsed: 6.011946383s
May 21 15:59:33.008: INFO: Pod "pod-subpath-test-projected-cjnl": Phase="Running", Reason="", readiness=true. Elapsed: 8.015447443s
May 21 15:59:35.012: INFO: Pod "pod-subpath-test-projected-cjnl": Phase="Running", Reason="", readiness=true. Elapsed: 10.019620072s
May 21 15:59:37.015: INFO: Pod "pod-subpath-test-projected-cjnl": Phase="Running", Reason="", readiness=true. Elapsed: 12.022839146s
May 21 15:59:39.019: INFO: Pod "pod-subpath-test-projected-cjnl": Phase="Running", Reason="", readiness=true. Elapsed: 14.026610834s
May 21 15:59:41.023: INFO: Pod "pod-subpath-test-projected-cjnl": Phase="Running", Reason="", readiness=true. Elapsed: 16.030625171s
May 21 15:59:43.027: INFO: Pod "pod-subpath-test-projected-cjnl": Phase="Running", Reason="", readiness=true. Elapsed: 18.034447639s
May 21 15:59:45.030: INFO: Pod "pod-subpath-test-projected-cjnl": Phase="Running", Reason="", readiness=true. Elapsed: 20.037764362s
May 21 15:59:47.033: INFO: Pod "pod-subpath-test-projected-cjnl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.041087021s
STEP: Saw pod success
May 21 15:59:47.034: INFO: Pod "pod-subpath-test-projected-cjnl" satisfied condition "Succeeded or Failed"
May 21 15:59:47.036: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-projected-cjnl container test-container-subpath-projected-cjnl:
STEP: delete the pod
May 21 15:59:47.049: INFO: Waiting for pod pod-subpath-test-projected-cjnl to disappear
May 21 15:59:47.052: INFO: Pod pod-subpath-test-projected-cjnl no longer exists
STEP: Deleting pod pod-subpath-test-projected-cjnl
May 21 15:59:47.052: INFO: Deleting pod "pod-subpath-test-projected-cjnl" in namespace "subpath-6683"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:59:47.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6683" for this suite.
• [SLOW TEST:22.104 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":84,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":49,"failed":0}
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:46.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating secret secrets-8446/secret-test-f38fc478-4fe7-4bde-86f9-f0564890c061
STEP: Creating a pod to test consume secrets
May 21 15:59:46.104: INFO: Waiting up to 5m0s for pod "pod-configmaps-11bfedcf-1248-4e36-85bc-15ef77b08095" in namespace "secrets-8446" to be "Succeeded or Failed"
May 21 15:59:46.110: INFO: Pod "pod-configmaps-11bfedcf-1248-4e36-85bc-15ef77b08095": Phase="Pending", Reason="", readiness=false. Elapsed: 5.477741ms
May 21 15:59:48.114: INFO: Pod "pod-configmaps-11bfedcf-1248-4e36-85bc-15ef77b08095": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009529267s
May 21 15:59:50.118: INFO: Pod "pod-configmaps-11bfedcf-1248-4e36-85bc-15ef77b08095": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013427977s
STEP: Saw pod success
May 21 15:59:50.118: INFO: Pod "pod-configmaps-11bfedcf-1248-4e36-85bc-15ef77b08095" satisfied condition "Succeeded or Failed"
May 21 15:59:50.121: INFO: Trying to get logs from node kali-worker pod pod-configmaps-11bfedcf-1248-4e36-85bc-15ef77b08095 container env-test:
STEP: delete the pod
May 21 15:59:50.136: INFO: Waiting for pod pod-configmaps-11bfedcf-1248-4e36-85bc-15ef77b08095 to disappear
May 21 15:59:50.139: INFO: Pod pod-configmaps-11bfedcf-1248-4e36-85bc-15ef77b08095 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:59:50.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8446" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":49,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:50.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 15:59:50.196: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-33e96e32-83b4-4156-80fd-b259d6b9bee7" in namespace "security-context-test-6430" to be "Succeeded or Failed"
May 21 15:59:50.199: INFO: Pod "alpine-nnp-false-33e96e32-83b4-4156-80fd-b259d6b9bee7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.248421ms
May 21 15:59:52.202: INFO: Pod "alpine-nnp-false-33e96e32-83b4-4156-80fd-b259d6b9bee7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005143853s
May 21 15:59:54.205: INFO: Pod "alpine-nnp-false-33e96e32-83b4-4156-80fd-b259d6b9bee7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008470767s
May 21 15:59:54.205: INFO: Pod "alpine-nnp-false-33e96e32-83b4-4156-80fd-b259d6b9bee7" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:59:54.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6430" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":62,"failed":0}
S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:54.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should check if kubectl can dry-run update Pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: running the image docker.io/library/httpd:2.4.38-alpine
May 21 15:59:54.250: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-7746 run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod'
May 21 15:59:54.383: INFO: stderr: ""
May 21 15:59:54.383: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: replace the image in the pod with server-side dry-run
May 21 15:59:54.383: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-7746 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "docker.io/library/busybox:1.29"}]}} --dry-run=server'
May 21 15:59:54.714: INFO: stderr: ""
May 21 15:59:54.714: INFO: stdout: "pod/e2e-test-httpd-pod patched\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine
May 21 15:59:54.717: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-7746 delete pods e2e-test-httpd-pod'
May 21 15:59:56.803: INFO: stderr: ""
May 21 15:59:56.803: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:59:56.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7746" for this suite.
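The Security Context spec a little earlier in this log waits for a pod that must not gain privileges at exec time. A minimal pod sketch of the `allowPrivilegeEscalation: false` setting it exercises (the name echoes the log's pod; the image and command are illustrative, not taken from this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: alpine-nnp-false          # hypothetical; echoes the alpine-nnp-false-… pod above
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: alpine:3.12            # illustrative image
    command: ["sh", "-c", "id -u"]
    securityContext:
      allowPrivilegeEscalation: false   # sets no_new_privs, so setuid binaries cannot elevate
```

The test passes when the pod runs to completion without its process ever acquiring more privileges than its parent.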
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":6,"skipped":63,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:34.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: set up a multi version CRD
May 21 15:59:34.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:59:58.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5110" for this suite.
• [SLOW TEST:23.987 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":7,"skipped":58,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:58.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should check if kubectl diff finds a difference for Deployments [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create deployment with httpd image
May 21 15:59:58.751: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-7076 create -f -'
May 21 15:59:59.036: INFO: stderr: ""
May 21 15:59:59.036: INFO: stdout: "deployment.apps/httpd-deployment created\n"
STEP: verify diff finds difference between live and declared image
May 21 15:59:59.036: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-7076 diff -f -'
May 21 15:59:59.444: INFO: rc: 1
May 21 15:59:59.444: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-7076 delete -f -'
May 21 15:59:59.566: INFO: stderr: ""
May 21 15:59:59.566: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 15:59:59.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7076" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":8,"skipped":75,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:45.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:02.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5227" for this suite.
• [SLOW TEST:17.073 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":12,"skipped":389,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:47.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:03.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4447" for this suite.
• [SLOW TEST:16.075 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":5,"skipped":115,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:03.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a collection of pod templates [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create set of pod templates
May 21 16:00:03.284: INFO: created test-podtemplate-1
May 21 16:00:03.287: INFO: created test-podtemplate-2
May 21 16:00:03.291: INFO: created test-podtemplate-3
STEP: get a list of pod templates with a label in the current namespace
STEP: delete collection of pod templates
May 21 16:00:03.293: INFO: requesting DeleteCollection of pod templates
STEP: check that the list of pod templates matches the requested quantity
May 21 16:00:03.306: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:03.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-578" for this suite.
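The two ResourceQuota specs above create a quota object and then watch its `status.used` counters rise and fall as a Secret and a ConfigMap are created and deleted. A minimal quota object of the kind involved (the name and the hard limits are illustrative, not taken from this run):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota        # hypothetical name
spec:
  hard:
    secrets: "10"         # status.used.secrets tracks live Secrets against this cap
    configmaps: "10"      # status.used.configmaps likewise for ConfigMaps
```

The tests pass when the quota controller reports usage going up after creation and back down after deletion, i.e. `status.used` stays consistent with the objects actually present in the namespace.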
•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":6,"skipped":147,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:40.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-secret-hk4x
STEP: Creating a pod to test atomic-volume-subpath
May 21 15:59:40.561: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-hk4x" in namespace "subpath-4204" to be "Succeeded or Failed"
May 21 15:59:40.563: INFO: Pod "pod-subpath-test-secret-hk4x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121826ms
May 21 15:59:42.567: INFO: Pod "pod-subpath-test-secret-hk4x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005484019s
May 21 15:59:44.571: INFO: Pod "pod-subpath-test-secret-hk4x": Phase="Running", Reason="", readiness=true. Elapsed: 4.009559123s
May 21 15:59:46.574: INFO: Pod "pod-subpath-test-secret-hk4x": Phase="Running", Reason="", readiness=true. Elapsed: 6.013268854s
May 21 15:59:48.578: INFO: Pod "pod-subpath-test-secret-hk4x": Phase="Running", Reason="", readiness=true. Elapsed: 8.016951597s
May 21 15:59:50.582: INFO: Pod "pod-subpath-test-secret-hk4x": Phase="Running", Reason="", readiness=true. Elapsed: 10.020427741s
May 21 15:59:52.585: INFO: Pod "pod-subpath-test-secret-hk4x": Phase="Running", Reason="", readiness=true. Elapsed: 12.024030509s
May 21 15:59:54.589: INFO: Pod "pod-subpath-test-secret-hk4x": Phase="Running", Reason="", readiness=true. Elapsed: 14.027706978s
May 21 15:59:56.593: INFO: Pod "pod-subpath-test-secret-hk4x": Phase="Running", Reason="", readiness=true. Elapsed: 16.031648948s
May 21 15:59:58.597: INFO: Pod "pod-subpath-test-secret-hk4x": Phase="Running", Reason="", readiness=true. Elapsed: 18.035714674s
May 21 16:00:00.602: INFO: Pod "pod-subpath-test-secret-hk4x": Phase="Running", Reason="", readiness=true. Elapsed: 20.040551801s
May 21 16:00:02.605: INFO: Pod "pod-subpath-test-secret-hk4x": Phase="Running", Reason="", readiness=true. Elapsed: 22.044264043s
May 21 16:00:04.609: INFO: Pod "pod-subpath-test-secret-hk4x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.047668801s
STEP: Saw pod success
May 21 16:00:04.609: INFO: Pod "pod-subpath-test-secret-hk4x" satisfied condition "Succeeded or Failed"
May 21 16:00:04.612: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-secret-hk4x container test-container-subpath-secret-hk4x:
STEP: delete the pod
May 21 16:00:04.626: INFO: Waiting for pod pod-subpath-test-secret-hk4x to disappear
May 21 16:00:04.629: INFO: Pod pod-subpath-test-secret-hk4x no longer exists
STEP: Deleting pod pod-subpath-test-secret-hk4x
May 21 16:00:04.629: INFO: Deleting pod "pod-subpath-test-secret-hk4x" in namespace "subpath-4204"
[AfterEach] [sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:04.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4204" for this suite.
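The repeated `Waiting up to 5m0s ... Elapsed:` entries above come from a poll-until-terminal-phase loop: fetch the pod phase, log the elapsed time, and stop on `Succeeded`/`Failed` or on timeout. A minimal sketch of that loop, with a stubbed `get_phase` callable standing in for the API call:

```python
import time

def wait_for_phase(get_phase, want=("Succeeded", "Failed"),
                   timeout=300.0, interval=0.01):
    """Poll get_phase() until it returns a terminal phase or the timeout
    expires, mirroring the 'Waiting up to 5m0s ... Elapsed:' log lines."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod: Phase={phase!r}. Elapsed: {elapsed:.6f}s')
        if phase in want:
            return phase
        if elapsed > timeout:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        time.sleep(interval)

# Stubbed phase sequence like the log above: Pending -> Running -> Succeeded.
phases = iter(["Pending", "Running", "Succeeded"])
result = wait_for_phase(lambda: next(phases))
```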
• [SLOW TEST:24.116 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":11,"skipped":105,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:04.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-7397/configmap-test-00ef610a-bcbe-496b-9b75-ebd26e5d3527
STEP: Creating a pod to test consume configMaps
May 21 16:00:04.689: INFO: Waiting up to 5m0s for pod "pod-configmaps-4a298e15-53d2-4dcf-a808-a5989493f9c4" in namespace "configmap-7397" to be "Succeeded or Failed"
May 21 16:00:04.692: INFO: Pod "pod-configmaps-4a298e15-53d2-4dcf-a808-a5989493f9c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.409838ms
May 21 16:00:06.696: INFO: Pod "pod-configmaps-4a298e15-53d2-4dcf-a808-a5989493f9c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006626671s
STEP: Saw pod success
May 21 16:00:06.696: INFO: Pod "pod-configmaps-4a298e15-53d2-4dcf-a808-a5989493f9c4" satisfied condition "Succeeded or Failed"
May 21 16:00:06.699: INFO: Trying to get logs from node kali-worker pod pod-configmaps-4a298e15-53d2-4dcf-a808-a5989493f9c4 container env-test:
STEP: delete the pod
May 21 16:00:06.714: INFO: Waiting for pod pod-configmaps-4a298e15-53d2-4dcf-a808-a5989493f9c4 to disappear
May 21 16:00:06.717: INFO: Pod pod-configmaps-4a298e15-53d2-4dcf-a808-a5989493f9c4 no longer exists
[AfterEach] [sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:06.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7397" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":112,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:56.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:07.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2804" for this suite.
• [SLOW TEST:11.090 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":7,"skipped":71,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:59.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8210.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8210.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 21 16:00:09.683: INFO: DNS probes using dns-8210/dns-test-41ec930f-4d06-4a6d-8aec-423c94f5b535 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:09.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8210" for this suite.
• [SLOW TEST:10.090 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":9,"skipped":92,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:07.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-6fb245f0-bd79-4599-953e-1eefa526ba92
STEP: Creating a pod to test consume secrets
May 21 16:00:07.981: INFO: Waiting up to 5m0s for pod "pod-secrets-099e9973-77ea-4a73-924e-5c77f9e39748" in namespace "secrets-7400" to be "Succeeded or Failed"
May 21 16:00:07.984: INFO: Pod "pod-secrets-099e9973-77ea-4a73-924e-5c77f9e39748": Phase="Pending", Reason="", readiness=false. Elapsed: 2.576983ms
May 21 16:00:09.987: INFO: Pod "pod-secrets-099e9973-77ea-4a73-924e-5c77f9e39748": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006193452s
STEP: Saw pod success
May 21 16:00:09.987: INFO: Pod "pod-secrets-099e9973-77ea-4a73-924e-5c77f9e39748" satisfied condition "Succeeded or Failed"
May 21 16:00:09.990: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-099e9973-77ea-4a73-924e-5c77f9e39748 container secret-env-test:
STEP: delete the pod
May 21 16:00:10.006: INFO: Waiting for pod pod-secrets-099e9973-77ea-4a73-924e-5c77f9e39748 to disappear
May 21 16:00:10.009: INFO: Pod pod-secrets-099e9973-77ea-4a73-924e-5c77f9e39748 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:10.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7400" for this suite.
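The Secrets spec above verifies that Secret values reach the container as environment variables. Secret `.data` values are stored base64-encoded, and they are decoded before being exported into the container's environment. A rough sketch of that decode step (the key name `SECRET_DATA` is hypothetical):

```python
import base64

# Secret .data values are base64-encoded at rest; they are decoded before
# each key is exposed as an environment variable in the test container.
secret_data = {"SECRET_DATA": base64.b64encode(b"value-1").decode()}

env = {key: base64.b64decode(val).decode() for key, val in secret_data.items()}
print(env)  # -> {'SECRET_DATA': 'value-1'}
```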
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":86,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:09.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting the auto-created API token
May 21 16:00:10.320: INFO: created pod pod-service-account-defaultsa
May 21 16:00:10.320: INFO: pod pod-service-account-defaultsa service account token volume mount: true
May 21 16:00:10.324: INFO: created pod pod-service-account-mountsa
May 21 16:00:10.324: INFO: pod pod-service-account-mountsa service account token volume mount: true
May 21 16:00:10.328: INFO: created pod pod-service-account-nomountsa
May 21 16:00:10.328: INFO: pod pod-service-account-nomountsa service account token volume mount: false
May 21 16:00:10.332: INFO: created pod pod-service-account-defaultsa-mountspec
May 21 16:00:10.332: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
May 21 16:00:10.335: INFO: created pod pod-service-account-mountsa-mountspec
May 21 16:00:10.335: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
May 21 16:00:10.339: INFO: created pod pod-service-account-nomountsa-mountspec
May 21 16:00:10.339: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
May 21 16:00:10.343: INFO: created pod pod-service-account-defaultsa-nomountspec
May 21 16:00:10.343: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
May 21 16:00:10.346: INFO: created pod pod-service-account-mountsa-nomountspec
May 21 16:00:10.346: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
May 21 16:00:10.349: INFO: created pod pod-service-account-nomountsa-nomountspec
May 21 16:00:10.349: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:10.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2497" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":10,"skipped":136,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:10.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a collection of events [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create set of events
May 21 16:00:10.490: INFO: created test-event-1
May 21 16:00:10.492: INFO: created test-event-2
May 21 16:00:10.495: INFO: created test-event-3
STEP: get a list of Events with a label in the current namespace
STEP: delete collection of events
May 21 16:00:10.497: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
May 21 16:00:10.509: INFO: requesting list of events to confirm quantity
[AfterEach] [sig-api-machinery] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:10.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-5761" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":-1,"completed":11,"skipped":204,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:02.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
May 21 16:00:02.397: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the sample API server.
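Each `{"msg":"PASSED ...","total":-1,"completed":N,"skipped":M,"failed":K}` line emitted between the specs above is a machine-readable per-node progress record (`"total":-1` means no per-node total was computed up front). A small sketch of tallying those lines, assuming exactly this JSON shape:

```python
import json

# Tally ginkgo per-spec summary lines: count PASSED messages and sum the
# reported failure counts across all records.
def tally(lines):
    passed = failed = 0
    for line in lines:
        rec = json.loads(line)
        if rec["msg"].startswith("PASSED"):
            passed += 1
        failed += rec["failed"]
    return passed, failed

sample = [
    '{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":-1,"completed":11,"skipped":204,"failed":0}',
]
print(tally(sample))  # -> (1, 0)
```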
May 21 16:00:02.906: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
May 21 16:00:04.940: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209602, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209602, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209602, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209602, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 21 16:00:06.943: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209602, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209602, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209602, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209602, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 21 16:00:08.943: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209602, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209602, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209602, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209602, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 21 16:00:11.969: INFO: Waited 1.017062944s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:12.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-4806" for this suite.
• [SLOW TEST:10.444 seconds]
[sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":13,"skipped":400,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:10.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
May 21 16:00:10.147: INFO: Waiting up to 5m0s for pod "pod-430edb63-c3c5-4a9e-b66f-46e31e5b6ff5" in namespace "emptydir-617" to be "Succeeded or Failed"
May 21 16:00:10.150: INFO: Pod "pod-430edb63-c3c5-4a9e-b66f-46e31e5b6ff5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.992629ms
May 21 16:00:12.153: INFO: Pod "pod-430edb63-c3c5-4a9e-b66f-46e31e5b6ff5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006062233s
May 21 16:00:14.156: INFO: Pod "pod-430edb63-c3c5-4a9e-b66f-46e31e5b6ff5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009780828s
May 21 16:00:16.160: INFO: Pod "pod-430edb63-c3c5-4a9e-b66f-46e31e5b6ff5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013552768s
STEP: Saw pod success
May 21 16:00:16.160: INFO: Pod "pod-430edb63-c3c5-4a9e-b66f-46e31e5b6ff5" satisfied condition "Succeeded or Failed"
May 21 16:00:16.164: INFO: Trying to get logs from node kali-worker2 pod pod-430edb63-c3c5-4a9e-b66f-46e31e5b6ff5 container test-container:
STEP: delete the pod
May 21 16:00:16.178: INFO: Waiting for pod pod-430edb63-c3c5-4a9e-b66f-46e31e5b6ff5 to disappear
May 21 16:00:16.180: INFO: Pod pod-430edb63-c3c5-4a9e-b66f-46e31e5b6ff5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:16.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-617" for this suite.
• [SLOW TEST:6.078 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":141,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:33.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-9919, will wait for the garbage collector to delete the pods
May 21 15:59:35.780: INFO: Deleting Job.batch foo took: 20.315575ms
May 21 15:59:35.880: INFO: Terminating Job.batch foo pods took: 100.180635ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:21.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9919" for this suite.
• [SLOW TEST:47.824 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":16,"skipped":268,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:10.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 16:00:10.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
May 21 16:00:15.528: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2465 --namespace=crd-publish-openapi-2465 create -f -'
May 21 16:00:15.986: INFO: stderr: ""
May 21 16:00:15.986: INFO: stdout: "e2e-test-crd-publish-openapi-6671-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
May 21 16:00:15.986: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2465 --namespace=crd-publish-openapi-2465 delete e2e-test-crd-publish-openapi-6671-crds test-foo'
May 21 16:00:16.130: INFO: stderr: ""
May 21 16:00:16.130: INFO: stdout: "e2e-test-crd-publish-openapi-6671-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
May 21 16:00:16.130: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2465 --namespace=crd-publish-openapi-2465 apply -f -'
May 21 16:00:16.407: INFO: stderr: ""
May 21 16:00:16.407: INFO: stdout: "e2e-test-crd-publish-openapi-6671-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
May 21 16:00:16.407: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2465 --namespace=crd-publish-openapi-2465 delete e2e-test-crd-publish-openapi-6671-crds test-foo'
May 21 16:00:16.540: INFO: stderr: ""
May 21 16:00:16.540: INFO: stdout: "e2e-test-crd-publish-openapi-6671-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
May 21 16:00:16.540: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2465 --namespace=crd-publish-openapi-2465 create -f -'
May 21 16:00:16.798: INFO: rc: 1
May 21 16:00:16.798: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2465 --namespace=crd-publish-openapi-2465 apply -f -'
May 21 16:00:17.038: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
May 21 16:00:17.038: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2465 --namespace=crd-publish-openapi-2465 create -f -'
May 21 16:00:17.328: INFO: rc: 1
May 21 16:00:17.329: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2465 --namespace=crd-publish-openapi-2465 apply -f -'
May 21 16:00:17.586: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
May 21 16:00:17.586: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2465 explain e2e-test-crd-publish-openapi-6671-crds'
May 21 16:00:17.860: INFO: stderr: ""
May 21 16:00:17.860: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6671-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
May 21 16:00:17.861: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2465 explain e2e-test-crd-publish-openapi-6671-crds.metadata'
May 21 16:00:18.122: INFO: stderr: ""
May 21 16:00:18.122: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6671-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects.
May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
May 21 16:00:18.123: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2465 explain e2e-test-crd-publish-openapi-6671-crds.spec'
May 21 16:00:18.390: INFO: stderr: ""
May 21 16:00:18.390: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6671-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n"
May 21 16:00:18.391: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2465 explain e2e-test-crd-publish-openapi-6671-crds.spec.bars'
May 21 16:00:18.661: INFO: stderr: ""
May 21 16:00:18.661: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6671-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
May 21 16:00:18.662: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2465 explain e2e-test-crd-publish-openapi-6671-crds.spec.bars2'
May 21 16:00:18.915: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:22.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2465" for this suite.
• [SLOW TEST:12.435 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD with validation schema [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":12,"skipped":205,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:12.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should serve a basic image on each replica with a public image [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating replication controller my-hostname-basic-99c8ad53-10ba-4d00-8fe6-49e31607c628
May 21 16:00:12.957: INFO: Pod name my-hostname-basic-99c8ad53-10ba-4d00-8fe6-49e31607c628: Found 0 pods out of 1
May 21 16:00:17.961: INFO: Pod name my-hostname-basic-99c8ad53-10ba-4d00-8fe6-49e31607c628: Found 1 pods out of 1
May 21 16:00:17.961: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-99c8ad53-10ba-4d00-8fe6-49e31607c628" are running
May 21 16:00:17.964: INFO: Pod "my-hostname-basic-99c8ad53-10ba-4d00-8fe6-49e31607c628-ssh2t" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-21 16:00:12 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-21 16:00:13 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-21 16:00:13 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-21 16:00:12 +0000 UTC Reason: Message:}])
May 21 16:00:17.964: INFO: Trying to dial the pod
May 21 16:00:22.975: INFO: Controller my-hostname-basic-99c8ad53-10ba-4d00-8fe6-49e31607c628: Got expected result from replica 1 [my-hostname-basic-99c8ad53-10ba-4d00-8fe6-49e31607c628-ssh2t]: "my-hostname-basic-99c8ad53-10ba-4d00-8fe6-49e31607c628-ssh2t", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:22.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8595" for this suite.
• [SLOW TEST:10.059 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":14,"skipped":469,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:06.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-5478
STEP: creating service affinity-nodeport in namespace services-5478
STEP: creating replication controller affinity-nodeport in namespace services-5478
I0521 16:00:06.809567 18 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-5478, replica count: 3
I0521 16:00:09.860215 18 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 21 16:00:09.868: INFO: Creating new exec pod
May 21 16:00:16.881: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-5478 exec execpod-affinity29kqr -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80'
May 21 16:00:17.135: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n"
May 21 16:00:17.135: INFO: stdout: ""
May 21 16:00:17.136: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-5478 exec execpod-affinity29kqr -- /bin/sh -x -c nc -zv -t -w 2 10.96.0.182 80'
May 21 16:00:17.382: INFO: stderr: "+ nc -zv -t -w 2 10.96.0.182 80\nConnection to 10.96.0.182 80 port [tcp/http] succeeded!\n"
May 21 16:00:17.382: INFO: stdout: ""
May 21 16:00:17.382: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-5478 exec execpod-affinity29kqr -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.2 31158'
May 21 16:00:17.601: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.2 31158\nConnection to 172.18.0.2 31158 port [tcp/31158] succeeded!\n"
May 21 16:00:17.601: INFO: stdout: ""
May 21 16:00:17.601: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-5478 exec execpod-affinity29kqr -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.4 31158'
May 21 16:00:17.858: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.4 31158\nConnection to 172.18.0.4 31158 port [tcp/31158] succeeded!\n"
May 21 16:00:17.858: INFO: stdout: ""
May 21 16:00:17.858: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-5478 exec execpod-affinity29kqr -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.2:31158/ ; done'
May 21 16:00:18.260: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:31158/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:31158/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:31158/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:31158/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:31158/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:31158/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:31158/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:31158/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:31158/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:31158/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:31158/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:31158/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:31158/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:31158/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:31158/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:31158/\n"
May 21 16:00:18.260: INFO: stdout: "\naffinity-nodeport-pj7gz\naffinity-nodeport-pj7gz\naffinity-nodeport-pj7gz\naffinity-nodeport-pj7gz\naffinity-nodeport-pj7gz\naffinity-nodeport-pj7gz\naffinity-nodeport-pj7gz\naffinity-nodeport-pj7gz\naffinity-nodeport-pj7gz\naffinity-nodeport-pj7gz\naffinity-nodeport-pj7gz\naffinity-nodeport-pj7gz\naffinity-nodeport-pj7gz\naffinity-nodeport-pj7gz\naffinity-nodeport-pj7gz\naffinity-nodeport-pj7gz"
May 21 16:00:18.260: INFO: Received response from host: affinity-nodeport-pj7gz
May 21 16:00:18.260: INFO: Received response from host: affinity-nodeport-pj7gz
May 21 16:00:18.260: INFO: Received response from host: affinity-nodeport-pj7gz
May 21 16:00:18.260: INFO: Received response from host: affinity-nodeport-pj7gz
May 21 16:00:18.260: INFO: Received response from host: affinity-nodeport-pj7gz
May 21 16:00:18.260: INFO: Received response from host: affinity-nodeport-pj7gz
May 21 16:00:18.260: INFO: Received response from host: affinity-nodeport-pj7gz
May 21 16:00:18.260: INFO: Received response from host: affinity-nodeport-pj7gz
May 21 16:00:18.260: INFO: Received response from host: affinity-nodeport-pj7gz
May 21 16:00:18.260: INFO: Received response from host: affinity-nodeport-pj7gz
May 21 16:00:18.260: INFO: Received response from host: affinity-nodeport-pj7gz
May 21 16:00:18.260: INFO: Received response from host: affinity-nodeport-pj7gz
May 21 16:00:18.260: INFO: Received response from host: affinity-nodeport-pj7gz
May 21 16:00:18.260: INFO: Received response from host: affinity-nodeport-pj7gz
May 21 16:00:18.260: INFO: Received response from host: affinity-nodeport-pj7gz
May 21 16:00:18.260: INFO: Received response from host: affinity-nodeport-pj7gz
May 21 16:00:18.260: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport in namespace services-5478, will wait for the garbage collector to delete the pods
May 21 16:00:18.326: INFO: Deleting ReplicationController affinity-nodeport took: 5.566208ms
May 21 16:00:18.427: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.260938ms
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:23.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5478" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:17.013 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
[BeforeEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:16.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-9b0f6b06-d736-49f4-81d7-6bfac7e989f3
STEP: Creating a pod to test consume configMaps
May 21 16:00:16.309: INFO: Waiting up to 5m0s for pod "pod-configmaps-be50432c-ccb6-400e-a9ce-f8acc31411f1" in namespace "configmap-9937" to be "Succeeded or Failed"
May 21 16:00:16.312: INFO: Pod "pod-configmaps-be50432c-ccb6-400e-a9ce-f8acc31411f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.840184ms
May 21 16:00:18.315: INFO: Pod "pod-configmaps-be50432c-ccb6-400e-a9ce-f8acc31411f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006070929s
May 21 16:00:20.319: INFO: Pod "pod-configmaps-be50432c-ccb6-400e-a9ce-f8acc31411f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009971334s
May 21 16:00:22.323: INFO: Pod "pod-configmaps-be50432c-ccb6-400e-a9ce-f8acc31411f1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014038521s
May 21 16:00:24.327: INFO: Pod "pod-configmaps-be50432c-ccb6-400e-a9ce-f8acc31411f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018231454s
STEP: Saw pod success
May 21 16:00:24.327: INFO: Pod "pod-configmaps-be50432c-ccb6-400e-a9ce-f8acc31411f1" satisfied condition "Succeeded or Failed"
May 21 16:00:24.330: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-be50432c-ccb6-400e-a9ce-f8acc31411f1 container configmap-volume-test:
STEP: delete the pod
May 21 16:00:24.344: INFO: Waiting for pod pod-configmaps-be50432c-ccb6-400e-a9ce-f8acc31411f1 to disappear
May 21 16:00:24.347: INFO: Pod pod-configmaps-be50432c-ccb6-400e-a9ce-f8acc31411f1 no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:24.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9937" for this suite.
• [SLOW TEST:8.089 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":189,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:57:48.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
May 21 15:57:48.861: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 21 15:57:48.863: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod with failed condition
STEP: updating the pod
May 21 15:59:49.390: INFO: Successfully updated pod "var-expansion-1ee01dfd-b517-4315-a856-d469b1ecbb53"
STEP: waiting for pod running
STEP: deleting the pod gracefully
May 21 15:59:51.397: INFO: Deleting pod "var-expansion-1ee01dfd-b517-4315-a856-d469b1ecbb53" in namespace "var-expansion-1920"
May 21 15:59:51.402: INFO: Wait up to 5m0s for pod "var-expansion-1ee01dfd-b517-4315-a856-d469b1ecbb53" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:25.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1920" for this suite.
• [SLOW TEST:156.583 seconds]
[k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:25.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:25.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5691" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":2,"skipped":3,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:21.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 16:00:21.525: INFO: Creating deployment "test-recreate-deployment"
May 21 16:00:21.529: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
May 21 16:00:21.535: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
May 21 16:00:23.546: INFO: Waiting deployment "test-recreate-deployment" to complete
May 21 16:00:23.549: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209621, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209621, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209621, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209621, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 21 16:00:25.553: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
May 21 16:00:25.561: INFO: Updating deployment test-recreate-deployment
May 21 16:00:25.561: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
May 21 16:00:25.618: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-9005 /apis/apps/v1/namespaces/deployment-9005/deployments/test-recreate-deployment c13733b7-d0ee-46e0-b7e1-c535dcfea684 17662 2 2021-05-21 16:00:21 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-05-21 16:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1
2021-05-21 16:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00516a608 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-05-21 16:00:25 +0000 UTC,LastTransitionTime:2021-05-21 16:00:25 
+0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2021-05-21 16:00:25 +0000 UTC,LastTransitionTime:2021-05-21 16:00:21 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 21 16:00:25.621: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-9005 /apis/apps/v1/namespaces/deployment-9005/replicasets/test-recreate-deployment-f79dd4667 b24dcb89-0742-4ab6-a6ab-d8537a30daff 17660 1 2021-05-21 16:00:25 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment c13733b7-d0ee-46e0-b7e1-c535dcfea684 0xc00516ae10 0xc00516ae11}] [] [{kube-controller-manager Update apps/v1 2021-05-21 16:00:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c13733b7-d0ee-46e0-b7e1-c535dcfea684\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00516aed8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil 
default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 21 16:00:25.621: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 21 16:00:25.621: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-c96cf48f deployment-9005 /apis/apps/v1/namespaces/deployment-9005/replicasets/test-recreate-deployment-c96cf48f f69d4d56-cbc7-4710-b00f-16e2c914ad3c 17651 2 2021-05-21 16:00:21 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment c13733b7-d0ee-46e0-b7e1-c535dcfea684 0xc00516ac6f 0xc00516ac80}] [] [{kube-controller-manager Update apps/v1 2021-05-21 16:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c13733b7-d0ee-46e0-b7e1-c535dcfea684\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelect
or{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: c96cf48f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00516ad38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 21 16:00:25.624: INFO: Pod "test-recreate-deployment-f79dd4667-rm7n6" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-rm7n6 test-recreate-deployment-f79dd4667- deployment-9005 /api/v1/namespaces/deployment-9005/pods/test-recreate-deployment-f79dd4667-rm7n6 3595b38c-eae0-4472-9487-2aac69d319a3 17657 0 2021-05-21 16:00:25 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 b24dcb89-0742-4ab6-a6ab-d8537a30daff 0xc00516b650 0xc00516b651}] [] [{kube-controller-manager Update v1 2021-05-21 16:00:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b24dcb89-0742-4ab6-a6ab-d8537a30daff\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ct4tj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ct4tj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ct4tj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:00:25.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9005" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":17,"skipped":271,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:00:22.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-cf52726b-53fe-41d3-bf60-0a4a8694dee7 STEP: Creating a pod to test consume secrets May 21 16:00:23.014: INFO: Waiting up to 5m0s for pod "pod-secrets-fcee870e-47d2-4937-8b77-f6c1e68770ad" in namespace "secrets-7054" to be "Succeeded or Failed" May 21 16:00:23.016: INFO: Pod "pod-secrets-fcee870e-47d2-4937-8b77-f6c1e68770ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.531068ms May 21 16:00:25.020: INFO: Pod "pod-secrets-fcee870e-47d2-4937-8b77-f6c1e68770ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005996754s May 21 16:00:27.024: INFO: Pod "pod-secrets-fcee870e-47d2-4937-8b77-f6c1e68770ad": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009888997s STEP: Saw pod success May 21 16:00:27.024: INFO: Pod "pod-secrets-fcee870e-47d2-4937-8b77-f6c1e68770ad" satisfied condition "Succeeded or Failed" May 21 16:00:27.027: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-fcee870e-47d2-4937-8b77-f6c1e68770ad container secret-volume-test: STEP: delete the pod May 21 16:00:27.041: INFO: Waiting for pod pod-secrets-fcee870e-47d2-4937-8b77-f6c1e68770ad to disappear May 21 16:00:27.044: INFO: Pod pod-secrets-fcee870e-47d2-4937-8b77-f6c1e68770ad no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:00:27.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7054" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":215,"failed":0} SSSSSS ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:00:22.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 21 16:00:29.549: INFO: Successfully updated pod "pod-update-637a4c15-6bd1-4896-99f2-0288a7939207" STEP: verifying the updated pod is in 
kubernetes May 21 16:00:29.555: INFO: Pod update OK [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:00:29.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-269" for this suite. • [SLOW TEST:6.567 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":475,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:00:24.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:00:29.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5628" for this suite. 
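The concurrent-watch test that just finished checks one invariant: every watcher, regardless of which resource version it started from, must see the same events in the same resourceVersion order. A minimal stand-alone sketch of that check in Python (the event tuples are hypothetical; no cluster or API calls are involved):

```python
# Sketch of the invariant verified by the concurrent-watch test:
# all watch streams must report the same, strictly increasing
# sequence of resourceVersions. Event data here is hypothetical.

def observed_order(events):
    """Return the resourceVersions in the order a watcher saw them."""
    return [rv for rv, _ in events]

def watchers_agree(streams):
    """True if every stream is strictly increasing and identical."""
    orders = [observed_order(s) for s in streams]
    first = orders[0]
    increasing = all(a < b for a, b in zip(first, first[1:]))
    return increasing and all(o == first for o in orders)

# Three watchers over the same five events (resourceVersion, type).
events = [(17901, "ADDED"), (17902, "MODIFIED"), (17903, "MODIFIED"),
          (17904, "MODIFIED"), (17905, "DELETED")]
streams = [list(events) for _ in range(3)]
print(watchers_agree(streams))  # → True
```

A stream delivered out of order, or diverging from the others, would make `watchers_agree` return False, which is what the real test treats as a failure.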
• [SLOW TEST:5.259 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":11,"skipped":200,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:00:29.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:00:29.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8797" for this suite. •SS ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":16,"skipped":520,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:00:29.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 21 16:00:29.862: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1180 /api/v1/namespaces/watch-1180/configmaps/e2e-watch-test-watch-closed 5bc16697-da61-414b-8a12-7922d8d8cec1 17906 0 2021-05-21 16:00:29 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-05-21 16:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 21 16:00:29.862: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1180 
/api/v1/namespaces/watch-1180/configmaps/e2e-watch-test-watch-closed 5bc16697-da61-414b-8a12-7922d8d8cec1 17907 0 2021-05-21 16:00:29 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-05-21 16:00:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 21 16:00:29.875: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1180 /api/v1/namespaces/watch-1180/configmaps/e2e-watch-test-watch-closed 5bc16697-da61-414b-8a12-7922d8d8cec1 17910 0 2021-05-21 16:00:29 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-05-21 16:00:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 21 16:00:29.876: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1180 /api/v1/namespaces/watch-1180/configmaps/e2e-watch-test-watch-closed 5bc16697-da61-414b-8a12-7922d8d8cec1 17911 0 2021-05-21 16:00:29 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-05-21 16:00:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:00:29.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1180" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":12,"skipped":300,"failed":0} S ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":13,"skipped":119,"failed":0} [BeforeEach] [k8s.io] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:00:23.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 21 16:00:29.811: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3924 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 16:00:29.811: INFO: >>> kubeConfig: /root/.kube/config May 21 16:00:29.944: INFO: Exec stderr: "" May 21 16:00:29.944: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3924 PodName:test-pod ContainerName:busybox-1 Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 16:00:29.944: INFO: >>> kubeConfig: /root/.kube/config May 21 16:00:30.071: INFO: Exec stderr: "" May 21 16:00:30.071: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3924 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 16:00:30.071: INFO: >>> kubeConfig: /root/.kube/config May 21 16:00:30.205: INFO: Exec stderr: "" May 21 16:00:30.205: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3924 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 16:00:30.205: INFO: >>> kubeConfig: /root/.kube/config May 21 16:00:30.289: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 21 16:00:30.289: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3924 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 16:00:30.289: INFO: >>> kubeConfig: /root/.kube/config May 21 16:00:30.412: INFO: Exec stderr: "" May 21 16:00:30.412: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3924 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 16:00:30.412: INFO: >>> kubeConfig: /root/.kube/config May 21 16:00:30.493: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 21 16:00:30.493: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3924 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 16:00:30.493: INFO: >>> kubeConfig: /root/.kube/config May 21 16:00:30.595: 
INFO: Exec stderr: "" May 21 16:00:30.595: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3924 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 16:00:30.595: INFO: >>> kubeConfig: /root/.kube/config May 21 16:00:30.707: INFO: Exec stderr: "" May 21 16:00:30.707: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3924 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 16:00:30.707: INFO: >>> kubeConfig: /root/.kube/config May 21 16:00:30.819: INFO: Exec stderr: "" May 21 16:00:30.819: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3924 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 16:00:30.819: INFO: >>> kubeConfig: /root/.kube/config May 21 16:00:30.894: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:00:30.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-3924" for this suite. 
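The KubeletManagedEtcHosts test above distinguishes three cases: a normal pod (kubelet writes /etc/hosts), a container that mounts /etc/hosts itself (kubelet stays away), and a hostNetwork=true pod (the node's file is used as-is). A minimal sketch of that decision as plain Python dicts standing in for pod specs; the volume name `host-etc-hosts` is hypothetical:

```python
# Sketch of the pod-spec conditions the test distinguishes.
# kubelet manages /etc/hosts only when the pod does NOT use the
# host network and the container does NOT mount /etc/hosts itself.

def kubelet_manages_etc_hosts(pod_spec, container):
    if pod_spec.get("hostNetwork", False):
        return False  # hostNetwork pods keep the node's /etc/hosts
    mounts = container.get("volumeMounts", [])
    return not any(m.get("mountPath") == "/etc/hosts" for m in mounts)

plain_pod = {"hostNetwork": False}
host_net_pod = {"hostNetwork": True}
busybox_1 = {"name": "busybox-1", "volumeMounts": []}
busybox_3 = {"name": "busybox-3",
             "volumeMounts": [{"name": "host-etc-hosts",
                               "mountPath": "/etc/hosts"}]}

print(kubelet_manages_etc_hosts(plain_pod, busybox_1))     # → True
print(kubelet_manages_etc_hosts(plain_pod, busybox_3))     # → False
print(kubelet_manages_etc_hosts(host_net_pod, busybox_1))  # → False
```

This mirrors why the test execs `cat /etc/hosts` in busybox-1/busybox-2 (expecting kubelet-managed content) but treats busybox-3 and the host-network pod differently.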
• [SLOW TEST:7.148 seconds] [k8s.io] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":119,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:00:30.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 21 16:00:30.987: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2995 /api/v1/namespaces/watch-2995/configmaps/e2e-watch-test-resource-version 7b9ad114-442d-43fb-b504-6ef9372ecd21 17984 0 2021-05-21 16:00:30 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-05-21 16:00:30 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 21 16:00:30.987: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2995 /api/v1/namespaces/watch-2995/configmaps/e2e-watch-test-resource-version 7b9ad114-442d-43fb-b504-6ef9372ecd21 17985 0 2021-05-21 16:00:30 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-05-21 16:00:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:00:30.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2995" for this suite. 
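Resuming a watch from a specific resource version, as the two watch tests above do, amounts to asking for exactly the events newer than the version last observed. A stand-alone sketch of that semantics (hypothetical in-memory event history, no API calls; resourceVersions are compared as integers here purely for illustration, whereas the real API treats them as opaque):

```python
# Sketch of resuming a watch from a known resourceVersion: the new
# watch must deliver exactly the events that occurred after it.

def resume_watch(all_events, last_rv):
    """Return events with a resourceVersion greater than last_rv."""
    return [e for e in all_events if e["rv"] > last_rv]

# Hypothetical history of one configmap, mirroring the log's
# ADDED / MODIFIED / MODIFIED / DELETED sequence.
history = [
    {"rv": 17906, "type": "ADDED",    "mutation": 0},
    {"rv": 17907, "type": "MODIFIED", "mutation": 1},
    {"rv": 17910, "type": "MODIFIED", "mutation": 2},
    {"rv": 17911, "type": "DELETED",  "mutation": 2},
]

# The first watch closed after seeing rv 17907; resume from there.
missed = resume_watch(history, 17907)
print([e["type"] for e in missed])  # → ['MODIFIED', 'DELETED']
```

That matches the log above: after the first watch closed at the second notification, the restarted watch received only the later MODIFIED (mutation: 2) and DELETED events.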
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":15,"skipped":140,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:27.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 21 16:00:27.096: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a2e646d7-ae0b-4f77-b6c1-92de914a11aa" in namespace "downward-api-6640" to be "Succeeded or Failed"
May 21 16:00:27.099: INFO: Pod "downwardapi-volume-a2e646d7-ae0b-4f77-b6c1-92de914a11aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.351684ms
May 21 16:00:29.103: INFO: Pod "downwardapi-volume-a2e646d7-ae0b-4f77-b6c1-92de914a11aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006650084s
May 21 16:00:31.108: INFO: Pod "downwardapi-volume-a2e646d7-ae0b-4f77-b6c1-92de914a11aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011362664s
STEP: Saw pod success
May 21 16:00:31.108: INFO: Pod "downwardapi-volume-a2e646d7-ae0b-4f77-b6c1-92de914a11aa" satisfied condition "Succeeded or Failed"
May 21 16:00:31.111: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-a2e646d7-ae0b-4f77-b6c1-92de914a11aa container client-container:
STEP: delete the pod
May 21 16:00:31.126: INFO: Waiting for pod downwardapi-volume-a2e646d7-ae0b-4f77-b6c1-92de914a11aa to disappear
May 21 16:00:31.129: INFO: Pod downwardapi-volume-a2e646d7-ae0b-4f77-b6c1-92de914a11aa no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:31.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6640" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":221,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:25.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 21 16:00:26.821: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 21 16:00:28.831: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209626, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209626, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209626, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209626, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 21 16:00:31.846: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 16:00:31.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1069-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:33.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4840" for this suite.
STEP: Destroying namespace "webhook-4840-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.321 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":18,"skipped":330,"failed":0}
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:33.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
May 21 16:00:33.100: INFO: Waiting up to 5m0s for pod "downward-api-ca5b4eae-75f8-4ccb-967e-7a9d9908a1c6" in namespace "downward-api-7203" to be "Succeeded or Failed"
May 21 16:00:33.103: INFO: Pod "downward-api-ca5b4eae-75f8-4ccb-967e-7a9d9908a1c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.538333ms
May 21 16:00:35.106: INFO: Pod "downward-api-ca5b4eae-75f8-4ccb-967e-7a9d9908a1c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00630737s
May 21 16:00:37.111: INFO: Pod "downward-api-ca5b4eae-75f8-4ccb-967e-7a9d9908a1c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010591788s
May 21 16:00:39.118: INFO: Pod "downward-api-ca5b4eae-75f8-4ccb-967e-7a9d9908a1c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01805952s
STEP: Saw pod success
May 21 16:00:39.118: INFO: Pod "downward-api-ca5b4eae-75f8-4ccb-967e-7a9d9908a1c6" satisfied condition "Succeeded or Failed"
May 21 16:00:39.123: INFO: Trying to get logs from node kali-worker pod downward-api-ca5b4eae-75f8-4ccb-967e-7a9d9908a1c6 container dapi-container:
STEP: delete the pod
May 21 16:00:39.137: INFO: Waiting for pod downward-api-ca5b4eae-75f8-4ccb-967e-7a9d9908a1c6 to disappear
May 21 16:00:39.140: INFO: Pod downward-api-ca5b4eae-75f8-4ccb-967e-7a9d9908a1c6 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:39.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7203" for this suite.
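The repeated `Waiting up to 5m0s for pod … to be "Succeeded or Failed"` / `Elapsed: …` lines in the tests above come from the framework's poll-until-condition loop: read the pod phase every couple of seconds, log the elapsed time, and give up at the timeout. A minimal sketch of that pattern; `get_phase` stands in for a real API read, and the names are illustrative rather than the framework's:

```python
import time

def wait_for_condition(get_phase, timeout_s=300.0, interval_s=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal phase or timeout_s elapses.

    Each poll logs the observed phase and the time elapsed since the wait
    began, mirroring the log lines above.
    """
    start = clock()
    while True:
        elapsed = clock() - start
        phase = get_phase()
        print(f'Pod phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout_s:
            raise TimeoutError(f"pod still {phase} after {timeout_s}s")
        sleep(interval_s)

# Simulate a pod that stays Pending for two polls, then succeeds,
# like the downward-api pods in the log.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_condition(lambda: next(phases),
                            timeout_s=10, interval_s=0, sleep=lambda _: None)
print(result)  # Succeeded
```

Injecting `clock` and `sleep` keeps the loop testable without real waiting; the e2e framework achieves the same effect with `wait.PollImmediate` and a condition function.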
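The deployment spec that follows exercises proportional scaling: a rollout to a non-existent image leaves two ReplicaSets (spec.replicas 8 and 5), and scaling the Deployment from 10 to 30 with `maxSurge: 3` distributes the allowed total of 33 proportionally, which is where the verified values 20 and 13 come from. A sketch of that arithmetic, assuming a simplified largest-remainder distribution rather than the deployment controller's exact annotation-driven bookkeeping; the helper name is illustrative:

```python
from math import floor

def proportional_scale(replicas_by_rs, new_total):
    """Split new_total across ReplicaSets proportionally to their current
    sizes, using largest-remainder rounding so the result sums exactly to
    new_total. (A simplification of the deployment controller's logic.)"""
    current_total = sum(replicas_by_rs)
    exact = [r * new_total / current_total for r in replicas_by_rs]
    sizes = [floor(x) for x in exact]
    # Hand the rounding leftover to the largest fractional parts.
    leftover = new_total - sum(sizes)
    order = sorted(range(len(exact)),
                   key=lambda i: exact[i] - sizes[i], reverse=True)
    for i in order[:leftover]:
        sizes[i] += 1
    return sizes

# From the log: old RS at 8, stuck new RS at 5; scaling 10 -> 30 with
# maxSurge: 3 allows 33 pods total, landing at .spec.replicas 20 and 13.
print(proportional_scale([8, 5], 30 + 3))  # [20, 13]
```

Proportional distribution means neither rollout monopolizes the extra capacity while the stuck rollout is unresolved.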
• [SLOW TEST:6.084 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":330,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:31.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78
[It] deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 16:00:31.217: INFO: Creating deployment "webserver-deployment"
May 21 16:00:31.221: INFO: Waiting for observed generation 1
May 21 16:00:33.228: INFO: Waiting for all required pods to come up
May 21 16:00:33.233: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
May 21 16:00:39.242: INFO: Waiting for deployment "webserver-deployment" to complete
May 21 16:00:39.248: INFO: Updating deployment "webserver-deployment" with a non-existent image
May 21 16:00:39.257: INFO: Updating deployment webserver-deployment
May 21 16:00:39.257: INFO: Waiting for observed generation 2
May 21 16:00:41.262: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 21 16:00:41.265: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 21 16:00:41.267: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 21 16:00:41.275: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May 21 16:00:41.276: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May 21 16:00:41.278: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 21 16:00:41.282: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
May 21 16:00:41.283: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
May 21 16:00:41.289: INFO: Updating deployment webserver-deployment
May 21 16:00:41.289: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
May 21 16:00:41.294: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May 21 16:00:41.297: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
May 21 16:00:41.303: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-6688 /apis/apps/v1/namespaces/deployment-6688/deployments/webserver-deployment 4deb5f20-8bc5-41ec-bf9d-3eab7bad073c 18448 3 2021-05-21 16:00:31 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-05-21 16:00:31 +0000 UTC FieldsV1
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-05-21 16:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004f10c58 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-05-21 16:00:36 +0000 UTC,LastTransitionTime:2021-05-21 16:00:36 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2021-05-21 16:00:39 +0000 UTC,LastTransitionTime:2021-05-21 16:00:31 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 21 16:00:41.307: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-6688 /apis/apps/v1/namespaces/deployment-6688/replicasets/webserver-deployment-795d758f88 0f277e6a-9c6b-4481-b108-ead3e5d9b12a 18452 3 2021-05-21 16:00:39 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 4deb5f20-8bc5-41ec-bf9d-3eab7bad073c 0xc004f57367 0xc004f57368}] [] [{kube-controller-manager Update apps/v1 2021-05-21 16:00:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4deb5f20-8bc5-41ec-bf9d-3eab7bad073c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004f573f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 21 16:00:41.307: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 21 16:00:41.307: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-6688 /apis/apps/v1/namespaces/deployment-6688/replicasets/webserver-deployment-dd94f59b7 61652710-4641-4c17-92b8-e41035f0d1d8 18449 3 2021-05-21 16:00:31 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 4deb5f20-8bc5-41ec-bf9d-3eab7bad073c 0xc004f57467 0xc004f57468}] [] [{kube-controller-manager Update apps/v1 2021-05-21 16:00:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4deb5f20-8bc5-41ec-bf9d-3eab7bad073c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:
&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004f574e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 21 16:00:41.314: INFO: Pod "webserver-deployment-795d758f88-4s8hx" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-4s8hx webserver-deployment-795d758f88- deployment-6688 /api/v1/namespaces/deployment-6688/pods/webserver-deployment-795d758f88-4s8hx e4447dcd-7b7a-42cd-8fe8-3a0d7f9a0633 18407 0 2021-05-21 16:00:39 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.118" ], "mac": "22:99:66:7b:9f:e3", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.118" ], "mac": "22:99:66:7b:9f:e3", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0f277e6a-9c6b-4481-b108-ead3e5d9b12a 0xc0050a07d7 0xc0050a07d8}] [] [{kube-controller-manager Update v1 2021-05-21 16:00:39 
+0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f277e6a-9c6b-4481-b108-ead3e5d9b12a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-05-21 16:00:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}} {multus Update v1 2021-05-21 16:00:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4sn49,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4sn49,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4sn49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalG
roups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2021-05-21 16:00:39 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 21 16:00:41.314: INFO: Pod "webserver-deployment-795d758f88-7n5tt" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-7n5tt webserver-deployment-795d758f88- deployment-6688 /api/v1/namespaces/deployment-6688/pods/webserver-deployment-795d758f88-7n5tt c1d33c83-5bd6-40aa-889a-b9c49c99e3cd 18465 0 2021-05-21 16:00:41 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0f277e6a-9c6b-4481-b108-ead3e5d9b12a 0xc0050a09b0 0xc0050a09b1}] [] [{kube-controller-manager Update v1 2021-05-21 16:00:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f277e6a-9c6b-4481-b108-ead3e5d9b12a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4sn49,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4sn49,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4sn49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 21 16:00:41.315: INFO: Pod "webserver-deployment-795d758f88-b9xwd" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-b9xwd webserver-deployment-795d758f88- deployment-6688 /api/v1/namespaces/deployment-6688/pods/webserver-deployment-795d758f88-b9xwd 
fa903ad9-459f-4ff7-8bdd-f63287a1e430 18437 0 2021-05-21 16:00:39 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.113" ], "mac": "46:03:2c:33:e4:04", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.113" ], "mac": "46:03:2c:33:e4:04", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0f277e6a-9c6b-4481-b108-ead3e5d9b12a 0xc0050a0b07 0xc0050a0b08}] [] [{kube-controller-manager Update v1 2021-05-21 16:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f277e6a-9c6b-4481-b108-ead3e5d9b12a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-05-21 16:00:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}} {multus Update v1 2021-05-21 16:00:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4sn49,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4sn49,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4sn49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGr
oups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2021-05-21 16:00:39 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 21 16:00:41.315: INFO: Pod "webserver-deployment-795d758f88-cq5tx" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-cq5tx webserver-deployment-795d758f88- deployment-6688 /api/v1/namespaces/deployment-6688/pods/webserver-deployment-795d758f88-cq5tx 79bfc5e2-fd61-4613-b635-62070bd235a7 18405 0 2021-05-21 16:00:39 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.112" ], "mac": "4a:46:e0:3b:a2:6d", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.112" ], "mac": "4a:46:e0:3b:a2:6d", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0f277e6a-9c6b-4481-b108-ead3e5d9b12a 0xc0050a0d80 0xc0050a0d81}] [] [{kube-controller-manager Update v1 2021-05-21 16:00:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f277e6a-9c6b-4481-b108-ead3e5d9b12a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-21 16:00:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4sn49,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4sn49,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4sn49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,
Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:39 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 21 16:00:41.315: INFO: Pod "webserver-deployment-795d758f88-gjk7l" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-gjk7l webserver-deployment-795d758f88- deployment-6688 /api/v1/namespaces/deployment-6688/pods/webserver-deployment-795d758f88-gjk7l 838d0635-13fc-475a-ba2b-5ac2d64284ff 18409 0 2021-05-21 16:00:39 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.117" ], "mac": "2a:f6:40:90:f9:88", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.117" ], "mac": "2a:f6:40:90:f9:88", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0f277e6a-9c6b-4481-b108-ead3e5d9b12a 0xc0050a0f20 0xc0050a0f21}] [] [{kube-controller-manager Update v1 2021-05-21 16:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f277e6a-9c6b-4481-b108-ead3e5d9b12a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-05-21 16:00:39 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}} {multus Update v1 2021-05-21 16:00:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4sn49,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4sn49,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4sn49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsO
ptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:39 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2021-05-21 16:00:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 21 16:00:41.316: INFO: Pod "webserver-deployment-795d758f88-j8bq7" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-j8bq7 webserver-deployment-795d758f88- deployment-6688 /api/v1/namespaces/deployment-6688/pods/webserver-deployment-795d758f88-j8bq7 2c5e5244-612f-42a1-9888-110f40773967 18462 0 2021-05-21 16:00:41 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0f277e6a-9c6b-4481-b108-ead3e5d9b12a 0xc0050a1150 0xc0050a1151}] [] [{kube-controller-manager Update v1 2021-05-21 16:00:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f277e6a-9c6b-4481-b108-ead3e5d9b12a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4sn49,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4sn49,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4sn49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 21 16:00:41.316: INFO: Pod "webserver-deployment-795d758f88-qr8hj" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-qr8hj webserver-deployment-795d758f88- deployment-6688 /api/v1/namespaces/deployment-6688/pods/webserver-deployment-795d758f88-qr8hj 
cda21799-59cb-4564-9910-07361b6c1b9e 18464 0 2021-05-21 16:00:41 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0f277e6a-9c6b-4481-b108-ead3e5d9b12a 0xc0050a1287 0xc0050a1288}] [] [{kube-controller-manager Update v1 2021-05-21 16:00:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f277e6a-9c6b-4481-b108-ead3e5d9b12a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4sn49,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4sn49,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4sn49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:ni
l,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:41 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 21 16:00:41.316: INFO: Pod "webserver-deployment-795d758f88-th2gq" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-th2gq webserver-deployment-795d758f88- deployment-6688 /api/v1/namespaces/deployment-6688/pods/webserver-deployment-795d758f88-th2gq ed7b1ed6-c431-408d-9da3-6bba58044bf8 18408 0 2021-05-21 16:00:39 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.119" ], "mac": "b6:35:d2:81:4d:03", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.119" ], "mac": "b6:35:d2:81:4d:03", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0f277e6a-9c6b-4481-b108-ead3e5d9b12a 0xc0050a1410 0xc0050a1411}] [] [{kube-controller-manager Update v1 2021-05-21 16:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f277e6a-9c6b-4481-b108-ead3e5d9b12a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-05-21 16:00:39 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}} {multus Update v1 2021-05-21 16:00:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4sn49,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4sn49,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4sn49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsO
ptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:39 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2021-05-21 16:00:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 21 16:00:41.317: INFO: Pod "webserver-deployment-dd94f59b7-54nvk" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-54nvk webserver-deployment-dd94f59b7- deployment-6688 /api/v1/namespaces/deployment-6688/pods/webserver-deployment-dd94f59b7-54nvk c04982d9-35c2-4d0a-8308-711201f88d7a 18469 0 2021-05-21 16:00:41 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 61652710-4641-4c17-92b8-e41035f0d1d8 0xc0050a1630 0xc0050a1631}] [] [{kube-controller-manager Update v1 2021-05-21 16:00:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61652710-4641-4c17-92b8-e41035f0d1d8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4sn49,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4sn49,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4sn49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 21 16:00:41.317: INFO: Pod "webserver-deployment-dd94f59b7-7jz2c" is available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-7jz2c webserver-deployment-dd94f59b7- deployment-6688 /api/v1/namespaces/deployment-6688/pods/webserver-deployment-dd94f59b7-7jz2c b4770f16-bff5-4e68-9b34-5687e0d2af62 18166 0 2021-05-21 16:00:31 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.113" ], "mac": "12:8c:90:65:4e:4b", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.113" ], "mac": "12:8c:90:65:4e:4b", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 61652710-4641-4c17-92b8-e41035f0d1d8 0xc0050a1760 0xc0050a1761}] [] [{kube-controller-manager Update v1 2021-05-21 16:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61652710-4641-4c17-92b8-e41035f0d1d8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-21 16:00:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-21 16:00:33 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.113\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4sn49,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4sn49,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4sn49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:31 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.113,StartTime:2021-05-21 16:00:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-21 16:00:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://df08f21d7ad4ba812bb69cd428146f3b1f23dbb8e2ca7cc91e065d7b04eee0e6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.113,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 21 16:00:41.317: INFO: Pod "webserver-deployment-dd94f59b7-7tk9n" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-7tk9n webserver-deployment-dd94f59b7- deployment-6688 /api/v1/namespaces/deployment-6688/pods/webserver-deployment-dd94f59b7-7tk9n acfed715-c129-4403-9aab-9dea85a55cfc 18468 0 2021-05-21 16:00:41 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 61652710-4641-4c17-92b8-e41035f0d1d8 0xc0050a19a0 0xc0050a19a1}] [] [{kube-controller-manager Update v1 2021-05-21 16:00:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61652710-4641-4c17-92b8-e41035f0d1d8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4sn49,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4sn49,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4sn49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 21 16:00:41.318: INFO: Pod "webserver-deployment-dd94f59b7-bg684" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-bg684 webserver-deployment-dd94f59b7- deployment-6688 /api/v1/namespaces/deployment-6688/pods/webserver-deployment-dd94f59b7-bg684 
6cca0ca3-b89e-4a57-872e-ade2a71a78c3 18466 0 2021-05-21 16:00:41 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 61652710-4641-4c17-92b8-e41035f0d1d8 0xc0050a1aa7 0xc0050a1aa8}] [] [{kube-controller-manager Update v1 2021-05-21 16:00:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61652710-4641-4c17-92b8-e41035f0d1d8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4sn49,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4sn49,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4sn49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe
:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralConta
inerStatuses:[]ContainerStatus{},},} May 21 16:00:41.318: INFO: Pod "webserver-deployment-dd94f59b7-bhnsx" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-bhnsx webserver-deployment-dd94f59b7- deployment-6688 /api/v1/namespaces/deployment-6688/pods/webserver-deployment-dd94f59b7-bhnsx f79cf742-2be3-44d9-a21e-d7863d35e3b6 18467 0 2021-05-21 16:00:41 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 61652710-4641-4c17-92b8-e41035f0d1d8 0xc0050a1bf7 0xc0050a1bf8}] [] [{kube-controller-manager Update v1 2021-05-21 16:00:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61652710-4641-4c17-92b8-e41035f0d1d8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4sn49,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4sn49,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]
ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4sn49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContai
ners:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 21 16:00:41.319: INFO: Pod "webserver-deployment-dd94f59b7-ddq48" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-ddq48 webserver-deployment-dd94f59b7- deployment-6688 /api/v1/namespaces/deployment-6688/pods/webserver-deployment-dd94f59b7-ddq48 0a84a6fe-20b5-4d38-a7e6-151160183f5a 18284 0 2021-05-21 16:00:31 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.106" ], "mac": "6e:9a:02:60:43:b6", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.106" ], "mac": "6e:9a:02:60:43:b6", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 61652710-4641-4c17-92b8-e41035f0d1d8 0xc0050a1d17 0xc0050a1d18}] [] [{kube-controller-manager Update v1 2021-05-21 16:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61652710-4641-4c17-92b8-e41035f0d1d8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-21 16:00:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-21 16:00:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.106\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4sn49,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4sn49,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4sn49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,Allo
wPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:32 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.106,StartTime:2021-05-21 16:00:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-21 16:00:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7bd1391e0ad5b015ed5531aa7368339086139a6f640ceff4a45e19cb1073118a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.106,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 21 16:00:41.319: INFO: Pod "webserver-deployment-dd94f59b7-h6d6c" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-h6d6c webserver-deployment-dd94f59b7- deployment-6688 /api/v1/namespaces/deployment-6688/pods/webserver-deployment-dd94f59b7-h6d6c 31ec00db-b27e-40af-8af7-2b045cd2b714 18172 0 2021-05-21 16:00:31 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.115" ], "mac": "16:87:13:36:ee:2f", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.115" ], "mac": "16:87:13:36:ee:2f", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 61652710-4641-4c17-92b8-e41035f0d1d8 0xc0050a1f80 0xc0050a1f81}] [] [{kube-controller-manager Update v1 2021-05-21 16:00:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61652710-4641-4c17-92b8-e41035f0d1d8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-21 16:00:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-21 16:00:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.115\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4sn49,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4sn49,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral
:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4sn49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.115,StartTime:2021-05-21 16:00:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-21 16:00:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5095bd00bda33f2a101dd7e79303660212d8eb6200811424302ace9f20554977,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.115,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 21 16:00:41.320: INFO: Pod "webserver-deployment-dd94f59b7-k62lf" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-k62lf webserver-deployment-dd94f59b7- deployment-6688 /api/v1/namespaces/deployment-6688/pods/webserver-deployment-dd94f59b7-k62lf c01c1814-9d42-496e-bc28-2fb8dab77b5f 18303 0 2021-05-21 16:00:31 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.110" ], "mac": "2e:9f:74:ea:82:56", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.110" ], "mac": "2e:9f:74:ea:82:56", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 61652710-4641-4c17-92b8-e41035f0d1d8 0xc0050ea1b0 0xc0050ea1b1}] [] [{kube-controller-manager Update v1 2021-05-21 16:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61652710-4641-4c17-92b8-e41035f0d1d8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-21 16:00:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-21 16:00:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.110\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4sn49,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4sn49,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4sn49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.110,StartTime:2021-05-21 16:00:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-21 16:00:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9bd770eb2c35834ca5857634e7b0c33f75d8250e1b0a047e9edb18c6e4a6c3fe,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.110,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 21 16:00:41.320: INFO: Pod "webserver-deployment-dd94f59b7-lpvwc" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-lpvwc webserver-deployment-dd94f59b7- deployment-6688 /api/v1/namespaces/deployment-6688/pods/webserver-deployment-dd94f59b7-lpvwc 1a8f8094-506a-4e5e-8a47-167d326d90fb 18250 0 2021-05-21 16:00:31 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.109" ], "mac": "7a:17:72:a7:cf:cb", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.109" ], "mac": "7a:17:72:a7:cf:cb", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 61652710-4641-4c17-92b8-e41035f0d1d8 0xc0050ea450 0xc0050ea451}] [] [{kube-controller-manager Update v1 2021-05-21 16:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61652710-4641-4c17-92b8-e41035f0d1d8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-21 16:00:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-21 16:00:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.109\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4sn49,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4sn49,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4sn49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.109,StartTime:2021-05-21 16:00:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-21 16:00:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://89aaab8b50f2f0f38c8a171bf4004eecea36b2f8eb520e5d01aa4dda56febba3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.109,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 21 16:00:41.320: INFO: Pod "webserver-deployment-dd94f59b7-mg78h" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-mg78h webserver-deployment-dd94f59b7- deployment-6688 /api/v1/namespaces/deployment-6688/pods/webserver-deployment-dd94f59b7-mg78h efcd9340-534e-4db7-afd1-6654faa46eda 18457 0 2021-05-21 16:00:41 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 61652710-4641-4c17-92b8-e41035f0d1d8 0xc0050ea690 0xc0050ea691}] [] [{kube-controller-manager Update v1 2021-05-21 16:00:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61652710-4641-4c17-92b8-e41035f0d1d8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4sn49,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4sn49,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4sn49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 21 16:00:41.321: INFO: Pod "webserver-deployment-dd94f59b7-px2r9" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-px2r9 webserver-deployment-dd94f59b7- deployment-6688 /api/v1/namespaces/deployment-6688/pods/webserver-deployment-dd94f59b7-px2r9 55ecb0c1-eb7e-4cfc-9c20-8ad3757ba400 18460 0 2021-05-21 16:00:41 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 61652710-4641-4c17-92b8-e41035f0d1d8 0xc0050ea810 0xc0050ea811}] [] [{kube-controller-manager Update v1 2021-05-21 16:00:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61652710-4641-4c17-92b8-e41035f0d1d8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4sn49,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4sn49,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4sn49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 21 16:00:41.321: INFO: Pod "webserver-deployment-dd94f59b7-tbmnm" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-tbmnm webserver-deployment-dd94f59b7- deployment-6688 /api/v1/namespaces/deployment-6688/pods/webserver-deployment-dd94f59b7-tbmnm 711ad596-8f1f-4358-a653-935f680321ac 18195 0 2021-05-21 16:00:31 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.114" ], "mac": "ce:83:9a:1e:36:d1", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.114" ], "mac": "ce:83:9a:1e:36:d1", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 61652710-4641-4c17-92b8-e41035f0d1d8 0xc0050ea970 0xc0050ea971}] [] [{kube-controller-manager Update v1 2021-05-21 16:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61652710-4641-4c17-92b8-e41035f0d1d8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-21 16:00:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-21 16:00:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.114\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4sn49,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4sn49,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4sn49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.114,StartTime:2021-05-21 16:00:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-21 16:00:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://280aaad4364a8241a59911b9d71201c1360630cc7d8beffe821ad296f3bcfe4c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.114,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 21 16:00:41.321: INFO: Pod "webserver-deployment-dd94f59b7-wvtvp" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-wvtvp webserver-deployment-dd94f59b7- deployment-6688 /api/v1/namespaces/deployment-6688/pods/webserver-deployment-dd94f59b7-wvtvp 71a9726b-8506-461a-ab76-398f4fb95b4c 18461 0 2021-05-21 16:00:41 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 61652710-4641-4c17-92b8-e41035f0d1d8 0xc0050eaba0 0xc0050eaba1}] [] [{kube-controller-manager Update v1 2021-05-21 16:00:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61652710-4641-4c17-92b8-e41035f0d1d8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4sn49,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4sn49,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4sn49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 21 
16:00:41.322: INFO: Pod "webserver-deployment-dd94f59b7-wwpgm" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-wwpgm webserver-deployment-dd94f59b7- deployment-6688 /api/v1/namespaces/deployment-6688/pods/webserver-deployment-dd94f59b7-wwpgm 4185fd32-81fe-4ac1-964e-2b9781afd97d 18297 0 2021-05-21 16:00:31 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.107" ], "mac": "2a:80:a7:c0:03:9e", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.107" ], "mac": "2a:80:a7:c0:03:9e", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 61652710-4641-4c17-92b8-e41035f0d1d8 0xc0050ead17 0xc0050ead18}] [] [{kube-controller-manager Update v1 2021-05-21 16:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61652710-4641-4c17-92b8-e41035f0d1d8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-21 16:00:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-21 16:00:37 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.107\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4sn49,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4sn49,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4sn49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:31 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.107,StartTime:2021-05-21 16:00:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-21 16:00:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4327a4898ef3b6819e3487f77f5fc039204bcd286ddf2a92410c662d26998709,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.107,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 21 16:00:41.322: INFO: Pod "webserver-deployment-dd94f59b7-zrxf2" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-zrxf2 webserver-deployment-dd94f59b7- deployment-6688 /api/v1/namespaces/deployment-6688/pods/webserver-deployment-dd94f59b7-zrxf2 778f20b6-dada-4a37-bbf7-34ae7c9b7c59 18287 0 2021-05-21 16:00:31 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.108" ], "mac": "1a:22:c8:3e:7b:5c", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.108" ], "mac": "1a:22:c8:3e:7b:5c", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 61652710-4641-4c17-92b8-e41035f0d1d8 0xc0050eaee0 0xc0050eaee1}] [] [{kube-controller-manager Update v1 2021-05-21 16:00:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61652710-4641-4c17-92b8-e41035f0d1d8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-21 16:00:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-21 16:00:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.108\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4sn49,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4sn49,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral
:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4sn49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableSer
viceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:00:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.108,StartTime:2021-05-21 16:00:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-21 16:00:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://192898cc1996aff8bbd813be69557394c0a4a0aaaba7608181be41f7ecf6196a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.108,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:00:41.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6688" for this suite. 
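The pod dumps above all belong to the proportional-scaling Deployment under test. As a rough sketch only: a manifest consistent with the logged pod spec (image, `name: httpd` labels, zero grace period) would look like the following. The replica count and surge/unavailable values are assumptions for illustration; the test's exact numbers are not in this excerpt. Proportional scaling means that when the Deployment is resized mid-rollout, the new replicas are split across the old and new ReplicaSets in proportion to their current sizes, within the `maxSurge`/`maxUnavailable` budget.

```yaml
# Hedged reconstruction, not the test's actual manifest.
# Image and labels are taken from the logged pod spec; replicas,
# maxSurge, and maxUnavailable are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver-deployment
spec:
  replicas: 10
  selector:
    matchLabels:
      name: httpd
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3          # extra pods allowed above the desired count mid-rollout
      maxUnavailable: 2    # pods that may be unavailable mid-rollout
  template:
    metadata:
      labels:
        name: httpd
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine
```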
• [SLOW TEST:10.142 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":15,"skipped":248,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:00:30.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 21 16:00:39.559: INFO: Successfully updated pod "labelsupdate747b0784-55fd-46d2-a379-0db97662c275" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:00:41.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4631" for this suite. 
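The Downward API test above ("should update labels on modification") creates a pod that mounts its own labels as a file via a `downwardAPI` volume, then mutates the labels and waits for the kubelet to refresh the file. A minimal sketch of such a pod, with an assumed image and paths (the log only shows the generated pod name):

```yaml
# Hedged sketch of a downwardAPI-volume pod; name, image, and
# mount path are assumptions, not values from this log.
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-example
  labels:
    key: value1
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    # Re-read the projected labels file so an update becomes observable.
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
```

Patching `metadata.labels` on the running pod (as the test does at 16:00:39) eventually changes the contents of `/etc/podinfo/labels` without restarting the container.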
• [SLOW TEST:10.582 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":141,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:00:29.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:00:42.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1985" for this suite. • [SLOW TEST:13.096 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":-1,"completed":17,"skipped":557,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:00:39.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-51fa18d8-2feb-480c-a873-060228259fa4 STEP: Creating a pod to test consume configMaps May 21 16:00:39.247: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-27325aac-0eff-4977-acd0-24fe1d0d787e" in namespace "projected-1096" to be "Succeeded or Failed" May 21 16:00:39.250: INFO: Pod "pod-projected-configmaps-27325aac-0eff-4977-acd0-24fe1d0d787e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.744806ms May 21 16:00:41.254: INFO: Pod "pod-projected-configmaps-27325aac-0eff-4977-acd0-24fe1d0d787e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006610478s May 21 16:00:43.258: INFO: Pod "pod-projected-configmaps-27325aac-0eff-4977-acd0-24fe1d0d787e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011177535s STEP: Saw pod success May 21 16:00:43.258: INFO: Pod "pod-projected-configmaps-27325aac-0eff-4977-acd0-24fe1d0d787e" satisfied condition "Succeeded or Failed" May 21 16:00:43.262: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-27325aac-0eff-4977-acd0-24fe1d0d787e container projected-configmap-volume-test: STEP: delete the pod May 21 16:00:43.277: INFO: Waiting for pod pod-projected-configmaps-27325aac-0eff-4977-acd0-24fe1d0d787e to disappear May 21 16:00:43.279: INFO: Pod pod-projected-configmaps-27325aac-0eff-4977-acd0-24fe1d0d787e no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:00:43.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1096" for this suite. • ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:00:03.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-7359 May 21 16:00:05.407: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-7359 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 
21 16:00:05.632: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" May 21 16:00:05.632: INFO: stdout: "iptables" May 21 16:00:05.632: INFO: proxyMode: iptables May 21 16:00:05.638: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 21 16:00:05.644: INFO: Pod kube-proxy-mode-detector still exists May 21 16:00:07.644: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 21 16:00:07.647: INFO: Pod kube-proxy-mode-detector still exists May 21 16:00:09.644: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 21 16:00:09.647: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-7359 STEP: creating replication controller affinity-clusterip-timeout in namespace services-7359 I0521 16:00:09.661487 23 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-7359, replica count: 3 I0521 16:00:12.712035 23 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0521 16:00:15.712252 23 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0521 16:00:18.712573 23 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 21 16:00:18.719: INFO: Creating new exec pod May 21 16:00:23.733: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-7359 exec execpod-affinitynpt79 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' May 21 16:00:23.998: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" May 21 16:00:23.998: 
INFO: stdout: "" May 21 16:00:23.999: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-7359 exec execpod-affinitynpt79 -- /bin/sh -x -c nc -zv -t -w 2 10.96.232.65 80' May 21 16:00:24.236: INFO: stderr: "+ nc -zv -t -w 2 10.96.232.65 80\nConnection to 10.96.232.65 80 port [tcp/http] succeeded!\n" May 21 16:00:24.236: INFO: stdout: "" May 21 16:00:24.236: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-7359 exec execpod-affinitynpt79 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.232.65:80/ ; done' May 21 16:00:24.618: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.232.65:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.232.65:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.232.65:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.232.65:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.232.65:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.232.65:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.232.65:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.232.65:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.232.65:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.232.65:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.232.65:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.232.65:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.232.65:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.232.65:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.232.65:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.232.65:80/\n" May 21 16:00:24.618: INFO: stdout: 
"\naffinity-clusterip-timeout-vvffg\naffinity-clusterip-timeout-vvffg\naffinity-clusterip-timeout-vvffg\naffinity-clusterip-timeout-vvffg\naffinity-clusterip-timeout-vvffg\naffinity-clusterip-timeout-vvffg\naffinity-clusterip-timeout-vvffg\naffinity-clusterip-timeout-vvffg\naffinity-clusterip-timeout-vvffg\naffinity-clusterip-timeout-vvffg\naffinity-clusterip-timeout-vvffg\naffinity-clusterip-timeout-vvffg\naffinity-clusterip-timeout-vvffg\naffinity-clusterip-timeout-vvffg\naffinity-clusterip-timeout-vvffg\naffinity-clusterip-timeout-vvffg" May 21 16:00:24.618: INFO: Received response from host: affinity-clusterip-timeout-vvffg May 21 16:00:24.618: INFO: Received response from host: affinity-clusterip-timeout-vvffg May 21 16:00:24.618: INFO: Received response from host: affinity-clusterip-timeout-vvffg May 21 16:00:24.618: INFO: Received response from host: affinity-clusterip-timeout-vvffg May 21 16:00:24.618: INFO: Received response from host: affinity-clusterip-timeout-vvffg May 21 16:00:24.618: INFO: Received response from host: affinity-clusterip-timeout-vvffg May 21 16:00:24.618: INFO: Received response from host: affinity-clusterip-timeout-vvffg May 21 16:00:24.618: INFO: Received response from host: affinity-clusterip-timeout-vvffg May 21 16:00:24.618: INFO: Received response from host: affinity-clusterip-timeout-vvffg May 21 16:00:24.618: INFO: Received response from host: affinity-clusterip-timeout-vvffg May 21 16:00:24.618: INFO: Received response from host: affinity-clusterip-timeout-vvffg May 21 16:00:24.618: INFO: Received response from host: affinity-clusterip-timeout-vvffg May 21 16:00:24.618: INFO: Received response from host: affinity-clusterip-timeout-vvffg May 21 16:00:24.618: INFO: Received response from host: affinity-clusterip-timeout-vvffg May 21 16:00:24.618: INFO: Received response from host: affinity-clusterip-timeout-vvffg May 21 16:00:24.618: INFO: Received response from host: affinity-clusterip-timeout-vvffg May 21 16:00:24.618: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-7359 exec execpod-affinitynpt79 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.96.232.65:80/' May 21 16:00:24.846: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.96.232.65:80/\n" May 21 16:00:24.846: INFO: stdout: "affinity-clusterip-timeout-vvffg" May 21 16:00:39.847: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-7359 exec execpod-affinitynpt79 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.96.232.65:80/' May 21 16:00:40.114: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.96.232.65:80/\n" May 21 16:00:40.114: INFO: stdout: "affinity-clusterip-timeout-xn69f" May 21 16:00:40.114: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-7359, will wait for the garbage collector to delete the pods May 21 16:00:40.183: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 6.051051ms May 21 16:00:40.283: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 100.340145ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:00:48.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7359" for this suite. 
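The session-affinity test above is consistent with a ClusterIP Service whose `sessionAffinity` is `ClientIP` with a short timeout: 16 back-to-back requests all hit `affinity-clusterip-timeout-vvffg`, while a request after a ~15 s pause lands on a different backend (`-xn69f`). A sketch of such a Service, with the selector, target port, and timeout value as assumptions (only the name, port 80, and ClusterIP 10.96.232.65 appear in the log):

```yaml
# Hedged reconstruction of the Service under test; selector,
# targetPort, and timeoutSeconds are illustrative assumptions.
apiVersion: v1
kind: Service
metadata:
  name: affinity-clusterip-timeout
spec:
  selector:
    name: affinity-clusterip-timeout
  ports:
  - port: 80
    targetPort: 9376
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      # Affinity entries expire after this many idle seconds, after
      # which a client may be routed to a different backend pod.
      timeoutSeconds: 10
```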
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:45.538 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":7,"skipped":179,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Ingress API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:48.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingress
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating Ingress API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
May 21 16:00:49.001: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
May 21 16:00:49.005: INFO: starting watch
STEP: patching
STEP: updating
May 21 16:00:49.017: INFO: waiting for watch events with expected annotations
May 21 16:00:49.017: INFO: missing expected annotations, waiting: map[string]string(nil)
May 21 16:00:49.018: INFO: missing expected annotations, waiting: map[string]string(nil)
May 21 16:00:49.018: INFO: saw patched and updated annotations
STEP: patching /status
STEP: updating /status
STEP: get /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Ingress API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:49.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-5616" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":8,"skipped":208,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:41.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-32be6dd2-8c21-4b25-933f-c84129f6da7c
STEP: Creating a pod to test consume configMaps
May 21 16:00:41.637: INFO: Waiting up to 5m0s for pod "pod-configmaps-20d82d96-ccaf-4402-aa51-6ce5bb7827a9" in namespace "configmap-7082" to be "Succeeded or Failed"
May 21 16:00:41.639: INFO: Pod "pod-configmaps-20d82d96-ccaf-4402-aa51-6ce5bb7827a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127498ms
May 21 16:00:43.642: INFO: Pod "pod-configmaps-20d82d96-ccaf-4402-aa51-6ce5bb7827a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00512872s
May 21 16:00:45.646: INFO: Pod "pod-configmaps-20d82d96-ccaf-4402-aa51-6ce5bb7827a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009112788s
May 21 16:00:47.649: INFO: Pod "pod-configmaps-20d82d96-ccaf-4402-aa51-6ce5bb7827a9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012789971s
May 21 16:00:49.653: INFO: Pod "pod-configmaps-20d82d96-ccaf-4402-aa51-6ce5bb7827a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.016196884s
STEP: Saw pod success
May 21 16:00:49.653: INFO: Pod "pod-configmaps-20d82d96-ccaf-4402-aa51-6ce5bb7827a9" satisfied condition "Succeeded or Failed"
May 21 16:00:49.655: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-20d82d96-ccaf-4402-aa51-6ce5bb7827a9 container configmap-volume-test:
STEP: delete the pod
May 21 16:00:49.669: INFO: Waiting for pod pod-configmaps-20d82d96-ccaf-4402-aa51-6ce5bb7827a9 to disappear
May 21 16:00:49.671: INFO: Pod pod-configmaps-20d82d96-ccaf-4402-aa51-6ce5bb7827a9 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:49.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7082" for this suite.
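The repeating `Phase="Pending" ... Elapsed: ...` records above come from the framework's wait loop: it polls the pod's phase on a fixed interval (roughly every 2s in this log) until the pod reaches a terminal phase ("Succeeded" or "Failed") or a 5m timeout elapses. A self-contained Python sketch of that pattern, with a simulated clock and hypothetical names (the real suite implements this in Go against the API server):

```python
import itertools

def wait_for_terminal_phase(get_phase, timeout_s=300, interval_s=2):
    """Poll a pod's phase until it is terminal ('Succeeded' or 'Failed')
    or the timeout elapses, mimicking the "Succeeded or Failed" wait in
    the log. get_phase() is a stand-in for a real API read."""
    elapsed = 0.0
    while elapsed <= timeout_s:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        elapsed += interval_s  # simulated clock; real code would sleep
    raise TimeoutError(f"pod not terminal after {timeout_s}s")

# Simulate the log's Pending -> Pending -> ... -> Succeeded progression:
phases = itertools.chain(["Pending"] * 4, itertools.repeat("Succeeded"))
phase, elapsed = wait_for_terminal_phase(lambda: next(phases))
```

With four Pending polls the simulated wait reaches the terminal phase at 8s, matching the ~8s elapsed times seen for these pods.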
• [SLOW TEST:8.071 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":155,"failed":0}
SSS
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:42.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's command
May 21 16:00:42.992: INFO: Waiting up to 5m0s for pod "var-expansion-39891a87-bc45-4869-bc85-3731856ab8dc" in namespace "var-expansion-629" to be "Succeeded or Failed"
May 21 16:00:42.995: INFO: Pod "var-expansion-39891a87-bc45-4869-bc85-3731856ab8dc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.289364ms
May 21 16:00:44.999: INFO: Pod "var-expansion-39891a87-bc45-4869-bc85-3731856ab8dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007038321s
May 21 16:00:47.002: INFO: Pod "var-expansion-39891a87-bc45-4869-bc85-3731856ab8dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010344511s
May 21 16:00:49.005: INFO: Pod "var-expansion-39891a87-bc45-4869-bc85-3731856ab8dc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013002497s
May 21 16:00:51.008: INFO: Pod "var-expansion-39891a87-bc45-4869-bc85-3731856ab8dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.016471215s
STEP: Saw pod success
May 21 16:00:51.009: INFO: Pod "var-expansion-39891a87-bc45-4869-bc85-3731856ab8dc" satisfied condition "Succeeded or Failed"
May 21 16:00:51.012: INFO: Trying to get logs from node kali-worker pod var-expansion-39891a87-bc45-4869-bc85-3731856ab8dc container dapi-container:
STEP: delete the pod
May 21 16:00:51.026: INFO: Waiting for pod var-expansion-39891a87-bc45-4869-bc85-3731856ab8dc to disappear
May 21 16:00:51.028: INFO: Pod var-expansion-39891a87-bc45-4869-bc85-3731856ab8dc no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:51.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-629" for this suite.
• [SLOW TEST:8.081 seconds]
[k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":565,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":365,"failed":0}
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:43.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 21 16:00:43.323: INFO: Waiting up to 5m0s for pod "downwardapi-volume-91cc796b-d068-4c92-9fee-6f6395f70dd1" in namespace "projected-8641" to be "Succeeded or Failed"
May 21 16:00:43.325: INFO: Pod "downwardapi-volume-91cc796b-d068-4c92-9fee-6f6395f70dd1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21633ms
May 21 16:00:45.328: INFO: Pod "downwardapi-volume-91cc796b-d068-4c92-9fee-6f6395f70dd1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005751324s
May 21 16:00:47.332: INFO: Pod "downwardapi-volume-91cc796b-d068-4c92-9fee-6f6395f70dd1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009399179s
May 21 16:00:49.336: INFO: Pod "downwardapi-volume-91cc796b-d068-4c92-9fee-6f6395f70dd1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012933397s
May 21 16:00:51.340: INFO: Pod "downwardapi-volume-91cc796b-d068-4c92-9fee-6f6395f70dd1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016928845s
May 21 16:00:53.344: INFO: Pod "downwardapi-volume-91cc796b-d068-4c92-9fee-6f6395f70dd1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.020877889s
STEP: Saw pod success
May 21 16:00:53.344: INFO: Pod "downwardapi-volume-91cc796b-d068-4c92-9fee-6f6395f70dd1" satisfied condition "Succeeded or Failed"
May 21 16:00:53.347: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-91cc796b-d068-4c92-9fee-6f6395f70dd1 container client-container:
STEP: delete the pod
May 21 16:00:53.361: INFO: Waiting for pod downwardapi-volume-91cc796b-d068-4c92-9fee-6f6395f70dd1 to disappear
May 21 16:00:53.363: INFO: Pod downwardapi-volume-91cc796b-d068-4c92-9fee-6f6395f70dd1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:53.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8641" for this suite.
• [SLOW TEST:10.083 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":365,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:53.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run through the lifecycle of a ServiceAccount [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a ServiceAccount
STEP: watching for the ServiceAccount to be added
STEP: patching the ServiceAccount
STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
STEP: deleting the ServiceAccount
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:53.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7902" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":22,"skipped":396,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:25.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-mqn7
STEP: Creating a pod to test atomic-volume-subpath
May 21 16:00:25.551: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-mqn7" in namespace "subpath-4649" to be "Succeeded or Failed"
May 21 16:00:25.553: INFO: Pod "pod-subpath-test-configmap-mqn7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.713602ms
May 21 16:00:27.557: INFO: Pod "pod-subpath-test-configmap-mqn7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006576187s
May 21 16:00:29.560: INFO: Pod "pod-subpath-test-configmap-mqn7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009567661s
May 21 16:00:31.564: INFO: Pod "pod-subpath-test-configmap-mqn7": Phase="Running", Reason="", readiness=true. Elapsed: 6.01319531s
May 21 16:00:33.568: INFO: Pod "pod-subpath-test-configmap-mqn7": Phase="Running", Reason="", readiness=true. Elapsed: 8.017520174s
May 21 16:00:35.573: INFO: Pod "pod-subpath-test-configmap-mqn7": Phase="Running", Reason="", readiness=true. Elapsed: 10.022675172s
May 21 16:00:37.578: INFO: Pod "pod-subpath-test-configmap-mqn7": Phase="Running", Reason="", readiness=true. Elapsed: 12.0268536s
May 21 16:00:39.581: INFO: Pod "pod-subpath-test-configmap-mqn7": Phase="Running", Reason="", readiness=true. Elapsed: 14.030111146s
May 21 16:00:41.584: INFO: Pod "pod-subpath-test-configmap-mqn7": Phase="Running", Reason="", readiness=true. Elapsed: 16.033146784s
May 21 16:00:43.588: INFO: Pod "pod-subpath-test-configmap-mqn7": Phase="Running", Reason="", readiness=true. Elapsed: 18.036865181s
May 21 16:00:45.592: INFO: Pod "pod-subpath-test-configmap-mqn7": Phase="Running", Reason="", readiness=true. Elapsed: 20.040823646s
May 21 16:00:47.595: INFO: Pod "pod-subpath-test-configmap-mqn7": Phase="Running", Reason="", readiness=true. Elapsed: 22.044620444s
May 21 16:00:49.599: INFO: Pod "pod-subpath-test-configmap-mqn7": Phase="Running", Reason="", readiness=true. Elapsed: 24.048496891s
May 21 16:00:51.603: INFO: Pod "pod-subpath-test-configmap-mqn7": Phase="Running", Reason="", readiness=true. Elapsed: 26.052193764s
May 21 16:00:53.606: INFO: Pod "pod-subpath-test-configmap-mqn7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.055724321s
STEP: Saw pod success
May 21 16:00:53.606: INFO: Pod "pod-subpath-test-configmap-mqn7" satisfied condition "Succeeded or Failed"
May 21 16:00:53.609: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-configmap-mqn7 container test-container-subpath-configmap-mqn7:
STEP: delete the pod
May 21 16:00:53.623: INFO: Waiting for pod pod-subpath-test-configmap-mqn7 to disappear
May 21 16:00:53.625: INFO: Pod pod-subpath-test-configmap-mqn7 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-mqn7
May 21 16:00:53.625: INFO: Deleting pod "pod-subpath-test-configmap-mqn7" in namespace "subpath-4649"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:53.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4649" for this suite.
• [SLOW TEST:28.131 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":20,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:49.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-527d0041-9fd0-41d6-8ba1-ee371e8a4cf1
STEP: Creating a pod to test consume configMaps
May 21 16:00:49.721: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-331e52a3-6cf7-4583-9ea2-53577268fa88" in namespace "projected-6969" to be "Succeeded or Failed"
May 21 16:00:49.723: INFO: Pod "pod-projected-configmaps-331e52a3-6cf7-4583-9ea2-53577268fa88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.27917ms
May 21 16:00:51.727: INFO: Pod "pod-projected-configmaps-331e52a3-6cf7-4583-9ea2-53577268fa88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006194016s
May 21 16:00:53.730: INFO: Pod "pod-projected-configmaps-331e52a3-6cf7-4583-9ea2-53577268fa88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009502768s
May 21 16:00:55.734: INFO: Pod "pod-projected-configmaps-331e52a3-6cf7-4583-9ea2-53577268fa88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013626763s
STEP: Saw pod success
May 21 16:00:55.734: INFO: Pod "pod-projected-configmaps-331e52a3-6cf7-4583-9ea2-53577268fa88" satisfied condition "Succeeded or Failed"
May 21 16:00:55.737: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-331e52a3-6cf7-4583-9ea2-53577268fa88 container projected-configmap-volume-test:
STEP: delete the pod
May 21 16:00:55.758: INFO: Waiting for pod pod-projected-configmaps-331e52a3-6cf7-4583-9ea2-53577268fa88 to disappear
May 21 16:00:55.761: INFO: Pod pod-projected-configmaps-331e52a3-6cf7-4583-9ea2-53577268fa88 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:55.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6969" for this suite.
• [SLOW TEST:6.084 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":158,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:55.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run through a ConfigMap lifecycle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a ConfigMap
STEP: fetching the ConfigMap
STEP: patching the ConfigMap
STEP: listing all ConfigMaps in all namespaces with a label selector
STEP: deleting the ConfigMap by collection with a label selector
STEP: listing all ConfigMaps in test namespace
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:55.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6475" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":19,"skipped":171,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:29.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-qc4l
STEP: Creating a pod to test atomic-volume-subpath
May 21 16:00:29.931: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qc4l" in namespace "subpath-7156" to be "Succeeded or Failed"
May 21 16:00:29.934: INFO: Pod "pod-subpath-test-configmap-qc4l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.75058ms
May 21 16:00:31.938: INFO: Pod "pod-subpath-test-configmap-qc4l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006578231s
May 21 16:00:33.942: INFO: Pod "pod-subpath-test-configmap-qc4l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010494166s
May 21 16:00:35.945: INFO: Pod "pod-subpath-test-configmap-qc4l": Phase="Running", Reason="", readiness=true. Elapsed: 6.0138185s
May 21 16:00:37.949: INFO: Pod "pod-subpath-test-configmap-qc4l": Phase="Running", Reason="", readiness=true. Elapsed: 8.017553555s
May 21 16:00:39.958: INFO: Pod "pod-subpath-test-configmap-qc4l": Phase="Running", Reason="", readiness=true. Elapsed: 10.027199374s
May 21 16:00:41.961: INFO: Pod "pod-subpath-test-configmap-qc4l": Phase="Running", Reason="", readiness=true. Elapsed: 12.030306399s
May 21 16:00:43.964: INFO: Pod "pod-subpath-test-configmap-qc4l": Phase="Running", Reason="", readiness=true. Elapsed: 14.033227376s
May 21 16:00:45.968: INFO: Pod "pod-subpath-test-configmap-qc4l": Phase="Running", Reason="", readiness=true. Elapsed: 16.03692632s
May 21 16:00:48.008: INFO: Pod "pod-subpath-test-configmap-qc4l": Phase="Running", Reason="", readiness=true. Elapsed: 18.077007037s
May 21 16:00:50.012: INFO: Pod "pod-subpath-test-configmap-qc4l": Phase="Running", Reason="", readiness=true. Elapsed: 20.080727247s
May 21 16:00:52.016: INFO: Pod "pod-subpath-test-configmap-qc4l": Phase="Running", Reason="", readiness=true. Elapsed: 22.084747165s
May 21 16:00:54.019: INFO: Pod "pod-subpath-test-configmap-qc4l": Phase="Running", Reason="", readiness=true. Elapsed: 24.087878616s
May 21 16:00:56.023: INFO: Pod "pod-subpath-test-configmap-qc4l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.091485985s
STEP: Saw pod success
May 21 16:00:56.023: INFO: Pod "pod-subpath-test-configmap-qc4l" satisfied condition "Succeeded or Failed"
May 21 16:00:56.025: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-configmap-qc4l container test-container-subpath-configmap-qc4l:
STEP: delete the pod
May 21 16:00:56.039: INFO: Waiting for pod pod-subpath-test-configmap-qc4l to disappear
May 21 16:00:56.041: INFO: Pod pod-subpath-test-configmap-qc4l no longer exists
STEP: Deleting pod pod-subpath-test-configmap-qc4l
May 21 16:00:56.041: INFO: Deleting pod "pod-subpath-test-configmap-qc4l" in namespace "subpath-7156"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:56.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7156" for this suite.
• [SLOW TEST:26.164 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":13,"skipped":301,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:53.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 21 16:00:53.565: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f85db5eb-1780-47f2-a249-c8d8907f655b" in namespace "projected-2615" to be "Succeeded or Failed"
May 21 16:00:53.568: INFO: Pod "downwardapi-volume-f85db5eb-1780-47f2-a249-c8d8907f655b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.462654ms
May 21 16:00:55.572: INFO: Pod "downwardapi-volume-f85db5eb-1780-47f2-a249-c8d8907f655b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006211737s
May 21 16:00:57.580: INFO: Pod "downwardapi-volume-f85db5eb-1780-47f2-a249-c8d8907f655b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014352339s
May 21 16:00:59.584: INFO: Pod "downwardapi-volume-f85db5eb-1780-47f2-a249-c8d8907f655b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01836094s
STEP: Saw pod success
May 21 16:00:59.584: INFO: Pod "downwardapi-volume-f85db5eb-1780-47f2-a249-c8d8907f655b" satisfied condition "Succeeded or Failed"
May 21 16:00:59.587: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-f85db5eb-1780-47f2-a249-c8d8907f655b container client-container:
STEP: delete the pod
May 21 16:00:59.599: INFO: Waiting for pod downwardapi-volume-f85db5eb-1780-47f2-a249-c8d8907f655b to disappear
May 21 16:00:59.601: INFO: Pod downwardapi-volume-f85db5eb-1780-47f2-a249-c8d8907f655b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:00:59.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2615" for this suite.
• [SLOW TEST:6.077 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":414,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:55.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[BeforeEach] Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1512
[It] should create a pod from an image when restart is Never [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: running the image docker.io/library/httpd:2.4.38-alpine
May 21 16:00:55.901: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6437 run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine'
May 21 16:00:56.051: INFO: stderr: ""
May 21 16:00:56.051: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
May 21 16:00:56.053: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6437 delete pods e2e-test-httpd-pod'
May 21 16:01:00.643: INFO: stderr: ""
May 21 16:01:00.643: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:01:00.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6437" for this suite.
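Each completed spec in this log also emits a one-line JSON progress record (the `{"msg":"PASSED ...","completed":...,"skipped":...,"failed":...}` lines). Tallying those records is a quick way to summarize a worker's run. A small Python sketch, using two records copied from this log (the parsing approach is an assumption, not part of the suite):

```python
import json

# One-line JSON progress records, as emitted per finished spec above.
records = [
    '{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":20,"skipped":177,"failed":0}',
    '{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":8,"skipped":208,"failed":0}',
]
# Count specs whose message starts with PASSED, and sum the failed field.
passed = sum(1 for r in records if json.loads(r)["msg"].startswith("PASSED"))
failed = sum(json.loads(r)["failed"] for r in records)
```

In a real run you would feed every JSON line from the log into `records` instead of the two samples shown here.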
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":20,"skipped":177,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:56.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 21 16:00:56.098: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c9ed12dd-c7c1-42b3-8ed8-31b03ffe6c65" in namespace "downward-api-2678" to be "Succeeded or Failed"
May 21 16:00:56.100: INFO: Pod "downwardapi-volume-c9ed12dd-c7c1-42b3-8ed8-31b03ffe6c65": Phase="Pending", Reason="", readiness=false. Elapsed: 1.790987ms
May 21 16:00:58.103: INFO: Pod "downwardapi-volume-c9ed12dd-c7c1-42b3-8ed8-31b03ffe6c65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005361252s
May 21 16:01:00.107: INFO: Pod "downwardapi-volume-c9ed12dd-c7c1-42b3-8ed8-31b03ffe6c65": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009248681s
May 21 16:01:02.112: INFO: Pod "downwardapi-volume-c9ed12dd-c7c1-42b3-8ed8-31b03ffe6c65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014514067s
STEP: Saw pod success
May 21 16:01:02.112: INFO: Pod "downwardapi-volume-c9ed12dd-c7c1-42b3-8ed8-31b03ffe6c65" satisfied condition "Succeeded or Failed"
May 21 16:01:02.116: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-c9ed12dd-c7c1-42b3-8ed8-31b03ffe6c65 container client-container:
STEP: delete the pod
May 21 16:01:02.131: INFO: Waiting for pod downwardapi-volume-c9ed12dd-c7c1-42b3-8ed8-31b03ffe6c65 to disappear
May 21 16:01:02.133: INFO: Pod downwardapi-volume-c9ed12dd-c7c1-42b3-8ed8-31b03ffe6c65 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:01:02.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2678" for this suite.
• [SLOW TEST:6.083 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":304,"failed":0}
SSSS
------------------------------
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:01:02.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating pod
May 21 16:01:06.198: INFO: Pod pod-hostip-d2492b22-542f-420d-964a-e6a580d7696b has hostIP: 172.18.0.2
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:01:06.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6919" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":308,"failed":0}
SS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:00:49.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 21 16:00:49.726: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 21 16:00:51.736: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209649,
loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209649, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209649, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209649, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 21 16:00:54.745: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:01:06.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9408" for this suite. STEP: Destroying namespace "webhook-9408-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.788 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:00:41.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-9d428595-7684-4141-8dd3-f1d68efadda1 in namespace container-probe-477 May 21 16:00:51.389: INFO: Started pod liveness-9d428595-7684-4141-8dd3-f1d68efadda1 in namespace container-probe-477 STEP: checking the pod's current state and verifying that restartCount is present May 21 16:00:51.392: INFO: Initial restart count of pod liveness-9d428595-7684-4141-8dd3-f1d68efadda1 is 0 May 21 16:01:07.425: INFO: Restart count of pod container-probe-477/liveness-9d428595-7684-4141-8dd3-f1d68efadda1 is now 1 (16.032902612s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:01:07.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-477" for this suite. • [SLOW TEST:26.092 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":258,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:01:06.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 21 16:01:06.253: INFO: Waiting up to 5m0s for pod "pod-131a07fc-72b6-47ae-8c6d-3b70da1578d7" in namespace "emptydir-2402" to be "Succeeded or Failed" May 21 16:01:06.256: INFO: Pod "pod-131a07fc-72b6-47ae-8c6d-3b70da1578d7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.014429ms May 21 16:01:08.259: INFO: Pod "pod-131a07fc-72b6-47ae-8c6d-3b70da1578d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006244567s STEP: Saw pod success May 21 16:01:08.259: INFO: Pod "pod-131a07fc-72b6-47ae-8c6d-3b70da1578d7" satisfied condition "Succeeded or Failed" May 21 16:01:08.262: INFO: Trying to get logs from node kali-worker pod pod-131a07fc-72b6-47ae-8c6d-3b70da1578d7 container test-container: STEP: delete the pod May 21 16:01:08.275: INFO: Waiting for pod pod-131a07fc-72b6-47ae-8c6d-3b70da1578d7 to disappear May 21 16:01:08.277: INFO: Pod pod-131a07fc-72b6-47ae-8c6d-3b70da1578d7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:01:08.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2402" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":310,"failed":0} SS ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":9,"skipped":220,"failed":0} [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:01:06.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl label 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1307 STEP: creating the pod May 21 16:01:06.918: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-174 create -f -' May 21 16:01:07.277: INFO: stderr: "" May 21 16:01:07.277: INFO: stdout: "pod/pause created\n" May 21 16:01:07.277: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 21 16:01:07.277: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-174" to be "running and ready" May 21 16:01:07.280: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221973ms May 21 16:01:09.284: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.006021304s May 21 16:01:09.284: INFO: Pod "pause" satisfied condition "running and ready" May 21 16:01:09.284: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod May 21 16:01:09.284: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-174 label pods pause testing-label=testing-label-value' May 21 16:01:09.430: INFO: stderr: "" May 21 16:01:09.430: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 21 16:01:09.430: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-174 get pod pause -L testing-label' May 21 16:01:09.556: INFO: stderr: "" May 21 16:01:09.556: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s testing-label-value\n" STEP: removing the label 
testing-label of a pod May 21 16:01:09.556: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-174 label pods pause testing-label-' May 21 16:01:09.681: INFO: stderr: "" May 21 16:01:09.681: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 21 16:01:09.681: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-174 get pod pause -L testing-label' May 21 16:01:09.800: INFO: stderr: "" May 21 16:01:09.800: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1313 STEP: using delete to clean up resources May 21 16:01:09.801: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-174 delete --grace-period=0 --force -f -' May 21 16:01:09.937: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 21 16:01:09.937: INFO: stdout: "pod \"pause\" force deleted\n" May 21 16:01:09.937: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-174 get rc,svc -l name=pause --no-headers' May 21 16:01:10.070: INFO: stderr: "No resources found in kubectl-174 namespace.\n" May 21 16:01:10.070: INFO: stdout: "" May 21 16:01:10.070: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-174 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 21 16:01:10.199: INFO: stderr: "" May 21 16:01:10.199: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:01:10.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-174" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":10,"skipped":220,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:01:07.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 21 16:01:10.169: INFO: Successfully updated pod "annotationupdate5c9acc7c-307f-43fd-aeef-1abeb4aa7951" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:01:12.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5345" for this suite. 
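The "should update annotations on modification" test above relies on a downwardAPI volume: pod annotations are projected into a file, and the kubelet rewrites that file when the annotations change. A hedged sketch of such a pod; all names and the annotation value here are illustrative, not taken from the test source:

```yaml
# Illustrative pod with a downwardAPI volume projecting metadata.annotations
# into /etc/podinfo/annotations; updating the pod's annotations causes the
# kubelet to refresh the file, which is what the e2e test observes.
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example
  annotations:
    build: "one"
spec:
  containers:
  - name: client-container
    image: docker.io/library/httpd:2.4.38-alpine
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
```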
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":358,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:01:10.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments May 21 16:01:10.245: INFO: Waiting up to 5m0s for pod "client-containers-b0dfe8d4-d5d8-4fcc-acb6-5b07b5a22e1f" in namespace "containers-902" to be "Succeeded or Failed" May 21 16:01:10.247: INFO: Pod "client-containers-b0dfe8d4-d5d8-4fcc-acb6-5b07b5a22e1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.534812ms May 21 16:01:12.251: INFO: Pod "client-containers-b0dfe8d4-d5d8-4fcc-acb6-5b07b5a22e1f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006470656s STEP: Saw pod success May 21 16:01:12.251: INFO: Pod "client-containers-b0dfe8d4-d5d8-4fcc-acb6-5b07b5a22e1f" satisfied condition "Succeeded or Failed" May 21 16:01:12.255: INFO: Trying to get logs from node kali-worker pod client-containers-b0dfe8d4-d5d8-4fcc-acb6-5b07b5a22e1f container test-container: STEP: delete the pod May 21 16:01:12.268: INFO: Waiting for pod client-containers-b0dfe8d4-d5d8-4fcc-acb6-5b07b5a22e1f to disappear May 21 16:01:12.272: INFO: Pod client-containers-b0dfe8d4-d5d8-4fcc-acb6-5b07b5a22e1f no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:01:12.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-902" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":221,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:01:12.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs May 21 16:01:12.288: INFO: Waiting up to 5m0s for pod "pod-b0125b70-f6d8-4633-8c2e-330ce1c18ccb" in namespace "emptydir-2099" to be "Succeeded or Failed" May 21 16:01:12.291: INFO: Pod 
"pod-b0125b70-f6d8-4633-8c2e-330ce1c18ccb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.812343ms May 21 16:01:14.295: INFO: Pod "pod-b0125b70-f6d8-4633-8c2e-330ce1c18ccb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006670034s STEP: Saw pod success May 21 16:01:14.295: INFO: Pod "pod-b0125b70-f6d8-4633-8c2e-330ce1c18ccb" satisfied condition "Succeeded or Failed" May 21 16:01:14.298: INFO: Trying to get logs from node kali-worker2 pod pod-b0125b70-f6d8-4633-8c2e-330ce1c18ccb container test-container: STEP: delete the pod May 21 16:01:14.319: INFO: Waiting for pod pod-b0125b70-f6d8-4633-8c2e-330ce1c18ccb to disappear May 21 16:01:14.321: INFO: Pod pod-b0125b70-f6d8-4633-8c2e-330ce1c18ccb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:01:14.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2099" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":383,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:01:14.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching May 21 16:01:14.451: INFO: starting watch STEP: patching STEP: updating May 21 16:01:14.460: INFO: waiting for watch events with expected annotations May 21 16:01:14.460: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:01:14.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-6889" for this suite. 
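The IngressClass API steps logged above (create, get, list, watch, patch, update, delete, delete collection) operate on `networking.k8s.io/v1` objects like the following minimal sketch; the name and controller string are illustrative, not from the test:

```yaml
# Minimal IngressClass of the kind the API-operations test creates,
# patches, and deletes via /apis/networking.k8s.io/v1.
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: example-ingressclass
spec:
  controller: example.com/ingress-controller
```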
• ------------------------------ {"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":19,"skipped":424,"failed":0} [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:01:14.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 21 16:01:14.527: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1a0dae76-b34e-451e-8eae-c828f5c2740c" in namespace "downward-api-9047" to be "Succeeded or Failed" May 21 16:01:14.529: INFO: Pod "downwardapi-volume-1a0dae76-b34e-451e-8eae-c828f5c2740c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.396424ms May 21 16:01:16.532: INFO: Pod "downwardapi-volume-1a0dae76-b34e-451e-8eae-c828f5c2740c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.005715594s STEP: Saw pod success May 21 16:01:16.532: INFO: Pod "downwardapi-volume-1a0dae76-b34e-451e-8eae-c828f5c2740c" satisfied condition "Succeeded or Failed" May 21 16:01:16.535: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-1a0dae76-b34e-451e-8eae-c828f5c2740c container client-container: STEP: delete the pod May 21 16:01:16.550: INFO: Waiting for pod downwardapi-volume-1a0dae76-b34e-451e-8eae-c828f5c2740c to disappear May 21 16:01:16.552: INFO: Pod downwardapi-volume-1a0dae76-b34e-451e-8eae-c828f5c2740c no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:01:16.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9047" for this suite. • ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:01:00.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:01:16.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5199" for this suite. • [SLOW TEST:16.108 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":-1,"completed":21,"skipped":181,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:01:08.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 16:01:08.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-2745
I0521 16:01:08.335701 24 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-2745, replica count: 1
I0521 16:01:09.386099 24 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 21 16:01:09.493: INFO: Created: latency-svc-n2j2t
May 21 16:01:09.499: INFO: Got endpoints: latency-svc-n2j2t [12.733714ms]
May 21 16:01:09.505: INFO: Created: latency-svc-j2448
May 21 16:01:09.508: INFO: Created: latency-svc-6rlhj
May 21 16:01:09.508: INFO: Got endpoints: latency-svc-j2448 [9.53634ms]
May 21 16:01:09.510: INFO: Created: latency-svc-s4np7
May 21 16:01:09.511: INFO: Got endpoints: latency-svc-6rlhj [11.971807ms]
May 21 16:01:09.514: INFO: Created: latency-svc-r2v6l
May 21 16:01:09.514: INFO: Got endpoints: latency-svc-s4np7 [15.015499ms]
May 21 16:01:09.516: INFO: Created: latency-svc-cqnqk
May 21 16:01:09.516: INFO: Got endpoints: latency-svc-r2v6l [17.026451ms]
May 21 16:01:09.518: INFO: Created: latency-svc-kzd87
May 21 16:01:09.518: INFO: Got endpoints: latency-svc-cqnqk [18.866955ms]
May 21 16:01:09.520: INFO: Created: latency-svc-ksf49
May 21 16:01:09.520: INFO: Got endpoints: latency-svc-kzd87 [21.160007ms]
May 21 16:01:09.522: INFO: Got endpoints: latency-svc-ksf49 [23.179066ms]
May 21 16:01:09.523: INFO: Created: latency-svc-dp7g2
May 21 16:01:09.524: INFO: Created: latency-svc-rsflz
May 21 16:01:09.525: INFO: Got endpoints: latency-svc-dp7g2 [25.508217ms]
May 21 16:01:09.527: INFO: Got endpoints: latency-svc-rsflz [28.041046ms]
May 21 16:01:09.528: INFO: Created: latency-svc-65ww8
May 21 16:01:09.530: INFO: Created: latency-svc-zdfk4
May 21 16:01:09.531: INFO: Got endpoints: latency-svc-65ww8 [31.492826ms]
May 21 16:01:09.532: INFO: Created: latency-svc-2269s
May 21 16:01:09.533: INFO: Got endpoints: latency-svc-zdfk4 [34.096026ms]
May 21 16:01:09.533: INFO: Created: latency-svc-72prp
May 21 16:01:09.534: INFO: Got endpoints: latency-svc-2269s [6.848045ms]
May 21 16:01:09.536: INFO: Created: latency-svc-h92rp
May 21 16:01:09.536: INFO: Got endpoints: latency-svc-72prp [36.827924ms]
May 21 16:01:09.538: INFO: Created: latency-svc-d8rdv
May 21 16:01:09.539: INFO: Got endpoints: latency-svc-h92rp [39.98558ms]
May 21 16:01:09.540: INFO: Got endpoints: latency-svc-d8rdv [40.830738ms]
May 21 16:01:09.540: INFO: Created: latency-svc-zb4pr
May 21 16:01:09.542: INFO: Created: latency-svc-hlcm9
May 21 16:01:09.542: INFO: Got endpoints: latency-svc-zb4pr [43.442161ms]
May 21 16:01:09.544: INFO: Created: latency-svc-vjt7t
May 21 16:01:09.544: INFO: Got endpoints: latency-svc-hlcm9 [35.835901ms]
May 21 16:01:09.546: INFO: Created: latency-svc-k6drj
May 21 16:01:09.547: INFO: Got endpoints: latency-svc-vjt7t [35.908007ms]
May 21 16:01:09.548: INFO: Created: latency-svc-7kldj
May 21 16:01:09.549: INFO: Got endpoints: latency-svc-k6drj [34.87723ms]
May 21 16:01:09.550: INFO: Created: latency-svc-r7775
May 21 16:01:09.551: INFO: Got endpoints: latency-svc-7kldj [35.100741ms]
May 21 16:01:09.552: INFO: Created: latency-svc-vgb5q
May 21 16:01:09.553: INFO: Got endpoints: latency-svc-r7775 [34.857246ms]
May 21 16:01:09.554: INFO: Created: latency-svc-jmtnp
May 21 16:01:09.555: INFO: Got endpoints: latency-svc-vgb5q [34.943338ms]
May 21 16:01:09.557: INFO: Created: latency-svc-hdqsx
May 21 16:01:09.557: INFO: Got endpoints: latency-svc-jmtnp [34.611383ms]
May 21 16:01:09.558: INFO: Created: latency-svc-6ms5v
May 21 16:01:09.559: INFO: Got endpoints: latency-svc-hdqsx [34.729131ms]
May 21 16:01:09.564: INFO: Created: latency-svc-mfwhl
May 21 16:01:09.564: INFO: Got endpoints: latency-svc-6ms5v [33.724656ms]
May 21 16:01:09.566: INFO: Got endpoints: latency-svc-mfwhl [33.090205ms]
May 21 16:01:09.566: INFO: Created: latency-svc-8dpfr
May 21 16:01:09.568: INFO: Created: latency-svc-cwxw7
May 21 16:01:09.569: INFO: Got endpoints: latency-svc-8dpfr [34.527556ms]
May 21 16:01:09.570: INFO: Created: latency-svc-7spzd
May 21 16:01:09.570: INFO: Got endpoints: latency-svc-cwxw7 [34.191004ms]
May 21 16:01:09.571: INFO: Created: latency-svc-dxb4f
May 21 16:01:09.572: INFO: Got endpoints: latency-svc-7spzd [32.975657ms]
May 21 16:01:09.573: INFO: Created: latency-svc-5rfdf
May 21 16:01:09.574: INFO: Got endpoints: latency-svc-dxb4f [33.561809ms]
May 21 16:01:09.575: INFO: Created: latency-svc-54nph
May 21 16:01:09.577: INFO: Created: latency-svc-hx4cg
May 21 16:01:09.580: INFO: Created: latency-svc-85bmw
May 21 16:01:09.581: INFO: Created: latency-svc-6zd5k
May 21 16:01:09.583: INFO: Created: latency-svc-fzlff
May 21 16:01:09.585: INFO: Created: latency-svc-sjhzd
May 21 16:01:09.587: INFO: Created: latency-svc-fzbqq
May 21 16:01:09.589: INFO: Created: latency-svc-6zmlg
May 21 16:01:09.591: INFO: Created: latency-svc-z49m8
May 21 16:01:09.593: INFO: Created: latency-svc-9hdb4
May 21 16:01:09.595: INFO: Created: latency-svc-l6lp7
May 21 16:01:09.596: INFO: Got endpoints: latency-svc-5rfdf [53.346818ms]
May 21 16:01:09.597: INFO: Created: latency-svc-d5d47
May 21 16:01:09.599: INFO: Created: latency-svc-w6gkh
May 21 16:01:09.601: INFO: Created: latency-svc-kjvm4
May 21 16:01:09.603: INFO: Created: latency-svc-mc74q
May 21 16:01:09.647: INFO: Got endpoints: latency-svc-54nph [103.000205ms]
May 21 16:01:09.654: INFO: Created: latency-svc-nr5dt
May 21 16:01:09.697: INFO: Got endpoints: latency-svc-hx4cg [150.039556ms]
May 21 16:01:09.703: INFO: Created: latency-svc-wtv92
May 21 16:01:09.747: INFO: Got endpoints: latency-svc-85bmw [198.484187ms]
May 21 16:01:09.754: INFO: Created: latency-svc-7djcz
May 21 16:01:09.796: INFO: Got endpoints: latency-svc-6zd5k [244.971933ms]
May 21 16:01:09.802: INFO: Created: latency-svc-62ds6
May 21 16:01:09.847: INFO: Got endpoints: latency-svc-fzlff [293.898601ms]
May 21 16:01:09.854: INFO: Created: latency-svc-shgt6
May 21 16:01:09.897: INFO: Got endpoints: latency-svc-sjhzd [342.070569ms]
May 21 16:01:09.905: INFO: Created: latency-svc-v8vd4
May 21 16:01:09.947: INFO: Got endpoints: latency-svc-fzbqq [390.282877ms]
May 21 16:01:09.955: INFO: Created: latency-svc-jskpr
May 21 16:01:09.997: INFO: Got endpoints: latency-svc-6zmlg [437.637913ms]
May 21 16:01:10.008: INFO: Created: latency-svc-w9ldv
May 21 16:01:10.048: INFO: Got endpoints: latency-svc-z49m8 [483.198725ms]
May 21 16:01:10.054: INFO: Created: latency-svc-qzlk2
May 21 16:01:10.097: INFO: Got endpoints: latency-svc-9hdb4 [530.767874ms]
May 21 16:01:10.104: INFO: Created: latency-svc-gjndb
May 21 16:01:10.147: INFO: Got endpoints: latency-svc-l6lp7 [578.580685ms]
May 21 16:01:10.154: INFO: Created: latency-svc-zvf55
May 21 16:01:10.197: INFO: Got endpoints: latency-svc-d5d47 [627.156302ms]
May 21 16:01:10.204: INFO: Created: latency-svc-px27q
May 21 16:01:10.246: INFO: Got endpoints: latency-svc-w6gkh [674.323023ms]
May 21 16:01:10.254: INFO: Created: latency-svc-7z62p
May 21 16:01:10.297: INFO: Got endpoints: latency-svc-kjvm4 [723.451322ms]
May 21 16:01:10.304: INFO: Created: latency-svc-z8xwv
May 21 16:01:10.347: INFO: Got endpoints: latency-svc-mc74q [751.45439ms]
May 21 16:01:10.354: INFO: Created: latency-svc-tzzb6
May 21 16:01:10.397: INFO: Got endpoints: latency-svc-nr5dt [749.656032ms]
May 21 16:01:10.404: INFO: Created: latency-svc-npcr9
May 21 16:01:10.447: INFO: Got endpoints: latency-svc-wtv92 [750.267367ms]
May 21 16:01:10.455: INFO: Created: latency-svc-gqwmn
May 21 16:01:10.497: INFO: Got endpoints: latency-svc-7djcz [749.641725ms]
May 21 16:01:10.504: INFO: Created: latency-svc-lncw9
May 21 16:01:10.547: INFO: Got endpoints: latency-svc-62ds6 [750.977726ms]
May 21 16:01:10.554: INFO: Created: latency-svc-n6kmd
May 21 16:01:10.597: INFO: Got endpoints: latency-svc-shgt6 [750.148514ms]
May 21 16:01:10.605: INFO: Created: latency-svc-6rc8s
May 21 16:01:10.647: INFO: Got endpoints: latency-svc-v8vd4 [749.733597ms]
May 21 16:01:10.655: INFO: Created: latency-svc-n2xg2
May 21 16:01:10.697: INFO: Got endpoints: latency-svc-jskpr [749.926586ms]
May 21 16:01:10.703: INFO: Created: latency-svc-rm97d
May 21 16:01:10.747: INFO: Got endpoints: latency-svc-w9ldv [749.507288ms]
May 21 16:01:10.754: INFO: Created: latency-svc-tddfq
May 21 16:01:10.797: INFO: Got endpoints: latency-svc-qzlk2 [749.354123ms]
May 21 16:01:10.803: INFO: Created: latency-svc-gqwrb
May 21 16:01:10.847: INFO: Got endpoints: latency-svc-gjndb [749.891531ms]
May 21 16:01:10.855: INFO: Created: latency-svc-zb789
May 21 16:01:10.897: INFO: Got endpoints: latency-svc-zvf55 [750.202216ms]
May 21 16:01:10.904: INFO: Created: latency-svc-4qzpk
May 21 16:01:10.947: INFO: Got endpoints: latency-svc-px27q [749.605554ms]
May 21 16:01:10.954: INFO: Created: latency-svc-nldss
May 21 16:01:11.047: INFO: Got endpoints: latency-svc-7z62p [800.602934ms]
May 21 16:01:11.055: INFO: Created: latency-svc-lnc2z
May 21 16:01:11.097: INFO: Got endpoints: latency-svc-z8xwv [799.652306ms]
May 21 16:01:11.104: INFO: Created: latency-svc-vq4zv
May 21 16:01:11.147: INFO: Got endpoints: latency-svc-tzzb6 [799.585162ms]
May 21 16:01:11.156: INFO: Created: latency-svc-79h4m
May 21 16:01:11.197: INFO: Got endpoints: latency-svc-npcr9 [799.848352ms]
May 21 16:01:11.204: INFO: Created: latency-svc-h4ndh
May 21 16:01:11.247: INFO: Got endpoints: latency-svc-gqwmn [799.688925ms]
May 21 16:01:11.256: INFO: Created: latency-svc-nqw4v
May 21 16:01:11.297: INFO: Got endpoints: latency-svc-lncw9 [799.749605ms]
May 21 16:01:11.303: INFO: Created: latency-svc-px8xs
May 21 16:01:11.348: INFO: Got endpoints: latency-svc-n6kmd [800.439521ms]
May 21 16:01:11.354: INFO: Created: latency-svc-gklst
May 21 16:01:11.397: INFO: Got endpoints: latency-svc-6rc8s [799.669032ms]
May 21 16:01:11.405: INFO: Created: latency-svc-w8bdd
May 21 16:01:11.447: INFO: Got endpoints: latency-svc-n2xg2 [799.88737ms]
May 21 16:01:11.456: INFO: Created: latency-svc-l4lh2
May 21 16:01:11.497: INFO: Got endpoints: latency-svc-rm97d [799.989699ms]
May 21 16:01:11.504: INFO: Created: latency-svc-dvkh7
May 21 16:01:11.548: INFO: Got endpoints: latency-svc-tddfq [801.259343ms]
May 21 16:01:11.556: INFO: Created: latency-svc-929pc
May 21 16:01:11.596: INFO: Got endpoints: latency-svc-gqwrb [799.204882ms]
May 21 16:01:11.603: INFO: Created: latency-svc-zr72w
May 21 16:01:11.647: INFO: Got endpoints: latency-svc-zb789 [800.159059ms]
May 21 16:01:11.654: INFO: Created: latency-svc-2q5f8
May 21 16:01:11.697: INFO: Got endpoints: latency-svc-4qzpk [799.440626ms]
May 21 16:01:11.704: INFO: Created: latency-svc-zglnv
May 21 16:01:11.747: INFO: Got endpoints: latency-svc-nldss [799.775113ms]
May 21 16:01:11.754: INFO: Created: latency-svc-5wmxx
May 21 16:01:11.797: INFO: Got endpoints: latency-svc-lnc2z [749.825459ms]
May 21 16:01:11.804: INFO: Created: latency-svc-4m8tv
May 21 16:01:11.847: INFO: Got endpoints: latency-svc-vq4zv [750.085134ms]
May 21 16:01:11.854: INFO: Created: latency-svc-q5x7v
May 21 16:01:11.897: INFO: Got endpoints: latency-svc-79h4m [749.822999ms]
May 21 16:01:11.902: INFO: Created: latency-svc-p85f6
May 21 16:01:11.946: INFO: Got endpoints: latency-svc-h4ndh [749.34261ms]
May 21 16:01:11.952: INFO: Created: latency-svc-m7sv8
May 21 16:01:11.996: INFO: Got endpoints: latency-svc-nqw4v [748.995515ms]
May 21 16:01:12.002: INFO: Created: latency-svc-d8zdt
May 21 16:01:12.047: INFO: Got endpoints: latency-svc-px8xs [749.815199ms]
May 21 16:01:12.053: INFO: Created: latency-svc-mf6cs
May 21 16:01:12.097: INFO: Got endpoints: latency-svc-gklst [748.880377ms]
May 21 16:01:12.104: INFO: Created: latency-svc-v9ddv
May 21 16:01:12.148: INFO: Got endpoints: latency-svc-w8bdd [750.725273ms]
May 21 16:01:12.155: INFO: Created: latency-svc-g9ssk
May 21 16:01:12.197: INFO: Got endpoints: latency-svc-l4lh2 [749.964141ms]
May 21 16:01:12.205: INFO: Created: latency-svc-zk5lz
May 21 16:01:12.248: INFO: Got endpoints: latency-svc-dvkh7 [750.436411ms]
May 21 16:01:12.255: INFO: Created: latency-svc-fhdkf
May 21 16:01:12.297: INFO: Got endpoints: latency-svc-929pc [749.059464ms]
May 21 16:01:12.304: INFO: Created: latency-svc-rgjzt
May 21 16:01:12.353: INFO: Got endpoints: latency-svc-zr72w [756.519799ms]
May 21 16:01:12.359: INFO: Created: latency-svc-hrzw2
May 21 16:01:12.397: INFO: Got endpoints: latency-svc-2q5f8 [749.54249ms]
May 21 16:01:12.403: INFO: Created: latency-svc-6z22d
May 21 16:01:12.447: INFO: Got endpoints: latency-svc-zglnv [750.130926ms]
May 21 16:01:12.455: INFO: Created: latency-svc-48j8l
May 21 16:01:12.498: INFO: Got endpoints: latency-svc-5wmxx [750.662856ms]
May 21 16:01:12.505: INFO: Created: latency-svc-4vpbj
May 21 16:01:12.547: INFO: Got endpoints: latency-svc-4m8tv [749.963871ms]
May 21 16:01:12.553: INFO: Created: latency-svc-4m5qc
May 21 16:01:12.597: INFO: Got endpoints: latency-svc-q5x7v [749.920976ms]
May 21 16:01:12.604: INFO: Created: latency-svc-n6szp
May 21 16:01:12.647: INFO: Got endpoints: latency-svc-p85f6 [750.444613ms]
May 21 16:01:12.654: INFO: Created: latency-svc-wntbj
May 21 16:01:12.697: INFO: Got endpoints: latency-svc-m7sv8 [750.56211ms]
May 21 16:01:12.704: INFO: Created: latency-svc-4g9f2
May 21 16:01:12.748: INFO: Got endpoints: latency-svc-d8zdt [751.470975ms]
May 21 16:01:12.755: INFO: Created: latency-svc-97cqp
May 21 16:01:12.797: INFO: Got endpoints: latency-svc-mf6cs [750.35657ms]
May 21 16:01:12.804: INFO: Created: latency-svc-crn8z
May 21 16:01:12.847: INFO: Got endpoints: latency-svc-v9ddv [750.205122ms]
May 21 16:01:12.854: INFO: Created: latency-svc-tjvgb
May 21 16:01:12.897: INFO: Got endpoints: latency-svc-g9ssk [749.330595ms]
May 21 16:01:12.905: INFO: Created: latency-svc-xbfxk
May 21 16:01:12.948: INFO: Got endpoints: latency-svc-zk5lz [749.940481ms]
May 21 16:01:12.956: INFO: Created: latency-svc-mckwm
May 21 16:01:13.002: INFO: Got endpoints: latency-svc-fhdkf [753.82156ms]
May 21 16:01:13.009: INFO: Created: latency-svc-x8ltt
May 21 16:01:13.047: INFO: Got endpoints: latency-svc-rgjzt [749.975829ms]
May 21 16:01:13.054: INFO: Created: latency-svc-mjplw
May 21 16:01:13.148: INFO: Got endpoints: latency-svc-hrzw2 [794.756598ms]
May 21 16:01:13.154: INFO: Created: latency-svc-mn9sl
May 21 16:01:13.197: INFO: Got endpoints: latency-svc-6z22d [799.714835ms]
May 21 16:01:13.203: INFO: Created: latency-svc-5kcc9
May 21 16:01:13.247: INFO: Got endpoints: latency-svc-48j8l [800.029535ms]
May 21 16:01:13.254: INFO: Created: latency-svc-lrd2b
May 21 16:01:13.298: INFO: Got endpoints: latency-svc-4vpbj [799.895464ms]
May 21 16:01:13.304: INFO: Created: latency-svc-ct9nh
May 21 16:01:13.347: INFO: Got endpoints: latency-svc-4m5qc [800.218188ms]
May 21 16:01:13.354: INFO: Created: latency-svc-q8v47
May 21 16:01:13.397: INFO: Got endpoints: latency-svc-n6szp [799.541292ms]
May 21 16:01:13.410: INFO: Created: latency-svc-jsznm
May 21 16:01:13.447: INFO: Got endpoints: latency-svc-wntbj [799.441811ms]
May 21 16:01:13.454: INFO: Created: latency-svc-mv9dj
May 21 16:01:13.497: INFO: Got endpoints: latency-svc-4g9f2 [799.893814ms]
May 21 16:01:13.504: INFO: Created: latency-svc-pcpr9
May 21 16:01:13.547: INFO: Got endpoints: latency-svc-97cqp [799.30318ms]
May 21 16:01:13.560: INFO: Created: latency-svc-7vvhs
May 21 16:01:13.597: INFO: Got endpoints: latency-svc-crn8z [799.780307ms]
May 21 16:01:13.603: INFO: Created: latency-svc-5csk9
May 21 16:01:13.647: INFO: Got endpoints: latency-svc-tjvgb [799.458852ms]
May 21 16:01:13.653: INFO: Created: latency-svc-5vjgg
May 21 16:01:13.697: INFO: Got endpoints: latency-svc-xbfxk [799.78121ms]
May 21 16:01:13.704: INFO: Created: latency-svc-47l2s
May 21 16:01:13.747: INFO: Got endpoints: latency-svc-mckwm [799.554427ms]
May 21 16:01:13.755: INFO: Created: latency-svc-2wkzg
May 21 16:01:13.798: INFO: Got endpoints: latency-svc-x8ltt [796.490811ms]
May 21 16:01:13.806: INFO: Created: latency-svc-c64nx
May 21 16:01:13.848: INFO: Got endpoints: latency-svc-mjplw [800.005935ms]
May 21 16:01:13.856: INFO: Created: latency-svc-nvlb9
May 21 16:01:13.897: INFO: Got endpoints: latency-svc-mn9sl [749.159542ms]
May 21 16:01:13.905: INFO: Created: latency-svc-gx6qz
May 21 16:01:13.947: INFO: Got endpoints: latency-svc-5kcc9 [750.614254ms]
May 21 16:01:13.955: INFO: Created: latency-svc-6l74b
May 21 16:01:13.997: INFO: Got endpoints: latency-svc-lrd2b [749.637837ms]
May 21 16:01:14.005: INFO: Created: latency-svc-fqtsw
May 21 16:01:14.047: INFO: Got endpoints: latency-svc-ct9nh [749.429535ms]
May 21 16:01:14.055: INFO: Created: latency-svc-vcxgp
May 21 16:01:14.097: INFO: Got endpoints: latency-svc-q8v47 [750.004237ms]
May 21 16:01:14.104: INFO: Created: latency-svc-9rgrr
May 21 16:01:14.148: INFO: Got endpoints: latency-svc-jsznm [751.415583ms]
May 21 16:01:14.156: INFO: Created: latency-svc-dql66
May 21 16:01:14.198: INFO: Got endpoints: latency-svc-mv9dj [751.204953ms]
May 21 16:01:14.206: INFO: Created: latency-svc-j8p58
May 21 16:01:14.248: INFO: Got endpoints: latency-svc-pcpr9 [750.599251ms]
May 21 16:01:14.255: INFO: Created: latency-svc-xl9sx
May 21 16:01:14.297: INFO: Got endpoints: latency-svc-7vvhs [750.028744ms]
May 21 16:01:14.310: INFO: Created: latency-svc-kfzkz
May 21 16:01:14.347: INFO: Got endpoints: latency-svc-5csk9 [749.841089ms]
May 21 16:01:14.354: INFO: Created: latency-svc-rctdw
May 21 16:01:14.397: INFO: Got endpoints: latency-svc-5vjgg [750.104691ms]
May 21 16:01:14.404: INFO: Created: latency-svc-28j2q
May 21 16:01:14.447: INFO: Got endpoints: latency-svc-47l2s [750.137012ms]
May 21 16:01:14.454: INFO: Created: latency-svc-fppdn
May 21 16:01:14.497: INFO: Got endpoints: latency-svc-2wkzg [749.605716ms]
May 21 16:01:14.503: INFO: Created: latency-svc-hd5kc
May 21 16:01:14.547: INFO: Got endpoints: latency-svc-c64nx [748.27646ms]
May 21 16:01:14.553: INFO: Created: latency-svc-qxjm6
May 21 16:01:14.597: INFO: Got endpoints: latency-svc-nvlb9 [749.650987ms]
May 21 16:01:14.605: INFO: Created: latency-svc-s6grx
May 21 16:01:14.647: INFO: Got endpoints: latency-svc-gx6qz [749.901167ms]
May 21 16:01:14.655: INFO: Created: latency-svc-5d2pn
May 21 16:01:14.697: INFO: Got endpoints: latency-svc-6l74b [749.308812ms]
May 21 16:01:14.703: INFO: Created: latency-svc-pj4hw
May 21 16:01:14.747: INFO: Got endpoints: latency-svc-fqtsw [750.042025ms]
May 21 16:01:14.754: INFO: Created: latency-svc-m24tk
May 21 16:01:14.798: INFO: Got endpoints: latency-svc-vcxgp [750.374801ms]
May 21 16:01:14.805: INFO: Created: latency-svc-s6sd4
May 21 16:01:14.847: INFO: Got endpoints: latency-svc-9rgrr [749.534157ms]
May 21 16:01:14.855: INFO: Created: latency-svc-7gbr2
May 21 16:01:14.897: INFO: Got endpoints: latency-svc-dql66 [748.965061ms]
May 21 16:01:14.909: INFO: Created: latency-svc-kbhjt
May 21 16:01:14.947: INFO: Got endpoints: latency-svc-j8p58 [749.205518ms]
May 21 16:01:14.955: INFO: Created: latency-svc-c4fkq
May 21 16:01:15.047: INFO: Got endpoints: latency-svc-xl9sx [799.685006ms]
May 21 16:01:15.054: INFO: Created: latency-svc-6z7rl
May 21 16:01:15.097: INFO: Got endpoints: latency-svc-kfzkz [799.858255ms]
May 21 16:01:15.103: INFO: Created: latency-svc-dbbpc
May 21 16:01:15.147: INFO: Got endpoints: latency-svc-rctdw [800.615908ms]
May 21 16:01:15.155: INFO: Created: latency-svc-fb5lb
May 21 16:01:15.198: INFO: Got endpoints: latency-svc-28j2q [800.727502ms]
May 21 16:01:15.205: INFO: Created: latency-svc-t6k52
May 21 16:01:15.247: INFO: Got endpoints: latency-svc-fppdn [799.932374ms]
May 21 16:01:15.253: INFO: Created: latency-svc-mlc6b
May 21 16:01:15.298: INFO: Got endpoints: latency-svc-hd5kc [800.658774ms]
May 21 16:01:15.305: INFO: Created: latency-svc-pmjlb
May 21 16:01:15.347: INFO: Got endpoints: latency-svc-qxjm6 [799.967211ms]
May 21 16:01:15.353: INFO: Created: latency-svc-jc45s
May 21 16:01:15.397: INFO: Got endpoints: latency-svc-s6grx [799.419302ms]
May 21 16:01:15.404: INFO: Created: latency-svc-zdxh9
May 21 16:01:15.447: INFO: Got endpoints: latency-svc-5d2pn [800.104964ms]
May 21 16:01:15.454: INFO: Created: latency-svc-4k5sh
May 21 16:01:15.497: INFO: Got endpoints: latency-svc-pj4hw [799.91817ms]
May 21 16:01:15.507: INFO: Created: latency-svc-4kwmh
May 21 16:01:15.548: INFO: Got endpoints: latency-svc-m24tk [800.437637ms]
May 21 16:01:15.554: INFO: Created: latency-svc-drghq
May 21 16:01:15.597: INFO: Got endpoints: latency-svc-s6sd4 [799.801537ms]
May 21 16:01:15.605: INFO: Created: latency-svc-xwx8t
May 21 16:01:15.648: INFO: Got endpoints: latency-svc-7gbr2 [800.538837ms]
May 21 16:01:15.655: INFO: Created: latency-svc-bhq7k
May 21 16:01:15.697: INFO: Got endpoints: latency-svc-kbhjt [799.954306ms]
May 21 16:01:15.704: INFO: Created: latency-svc-9fmql
May 21 16:01:15.747: INFO: Got endpoints: latency-svc-c4fkq [799.601566ms]
May 21 16:01:15.754: INFO: Created: latency-svc-vjg2t
May 21 16:01:15.797: INFO: Got endpoints: latency-svc-6z7rl [749.581439ms]
May 21 16:01:15.804: INFO: Created: latency-svc-ts8nj
May 21 16:01:15.847: INFO: Got endpoints: latency-svc-dbbpc [749.864559ms]
May 21 16:01:15.854: INFO: Created: latency-svc-mz5g9
May 21 16:01:15.897: INFO: Got endpoints: latency-svc-fb5lb [749.612732ms]
May 21 16:01:15.905: INFO: Created: latency-svc-qkx5v
May 21 16:01:15.948: INFO: Got endpoints: latency-svc-t6k52 [750.26299ms]
May 21 16:01:15.955: INFO: Created: latency-svc-g7264
May 21 16:01:16.048: INFO: Got endpoints: latency-svc-mlc6b [800.683168ms]
May 21 16:01:16.055: INFO: Created: latency-svc-x44z2
May 21 16:01:16.097: INFO: Got endpoints: latency-svc-pmjlb [799.583242ms]
May 21 16:01:16.103: INFO: Created: latency-svc-hmpdf
May 21 16:01:16.148: INFO: Got endpoints: latency-svc-jc45s [801.183432ms]
May 21 16:01:16.155: INFO: Created: latency-svc-ngh5m
May 21 16:01:16.198: INFO: Got endpoints: latency-svc-zdxh9 [800.957315ms]
May 21 16:01:16.204: INFO: Created: latency-svc-wxwgw
May 21 16:01:16.247: INFO: Got endpoints: latency-svc-4k5sh [800.287177ms]
May 21 16:01:16.255: INFO: Created: latency-svc-2bzvx
May 21 16:01:16.297: INFO: Got endpoints: latency-svc-4kwmh [800.320509ms]
May 21 16:01:16.305: INFO: Created: latency-svc-f5ftd
May 21 16:01:16.347: INFO: Got endpoints: latency-svc-drghq [799.71214ms]
May 21 16:01:16.355: INFO: Created: latency-svc-b66g9
May 21 16:01:16.397: INFO: Got endpoints: latency-svc-xwx8t [799.812065ms]
May 21 16:01:16.404: INFO: Created: latency-svc-r624j
May 21 16:01:16.447: INFO: Got endpoints: latency-svc-bhq7k [799.023093ms]
May 21 16:01:16.453: INFO: Created: latency-svc-jwwdx
May 21 16:01:16.498: INFO: Got endpoints: latency-svc-9fmql [800.498927ms]
May 21 16:01:16.504: INFO: Created: latency-svc-j8pgs
May 21 16:01:16.547: INFO: Got endpoints: latency-svc-vjg2t [799.962598ms]
May 21 16:01:16.553: INFO: Created: latency-svc-bhwmn
May 21 16:01:16.598: INFO: Got endpoints: latency-svc-ts8nj [800.503883ms]
May 21 16:01:16.606: INFO: Created: latency-svc-8tbdc
May 21 16:01:16.647: INFO: Got endpoints: latency-svc-mz5g9 [799.718522ms]
May 21 16:01:16.653: INFO: Created: latency-svc-z692z
May 21 16:01:16.697: INFO: Got endpoints: latency-svc-qkx5v [799.919235ms]
May 21 16:01:16.704: INFO: Created: latency-svc-fpkpl
May 21 16:01:16.747: INFO: Got endpoints: latency-svc-g7264 [799.161432ms]
May 21 16:01:16.755: INFO: Created: latency-svc-8hc27
May 21 16:01:16.797: INFO: Got endpoints: latency-svc-x44z2 [749.318823ms]
May 21 16:01:16.805: INFO: Created: latency-svc-ssv8x
May 21 16:01:16.897: INFO: Got endpoints: latency-svc-hmpdf [799.856837ms]
May 21 16:01:16.904: INFO: Created: latency-svc-fqmxm
May 21 16:01:16.947: INFO: Got endpoints: latency-svc-ngh5m [799.035446ms]
May 21 16:01:16.955: INFO: Created: latency-svc-qks5r
May 21 16:01:16.998: INFO: Got endpoints: latency-svc-wxwgw [799.807895ms]
May 21 16:01:17.005: INFO: Created: latency-svc-psbhl
May 21 16:01:17.048: INFO: Got endpoints: latency-svc-2bzvx [800.380665ms]
May 21 16:01:17.063: INFO: Created: latency-svc-jsnhz
May 21 16:01:17.098: INFO: Got endpoints: latency-svc-f5ftd [800.429015ms]
May 21 16:01:17.113: INFO: Created: latency-svc-fkm9p
May 21 16:01:17.148: INFO: Got endpoints: latency-svc-b66g9 [800.152206ms]
May 21 16:01:17.156: INFO: Created: latency-svc-q59kj
May 21 16:01:17.198: INFO: Got endpoints: latency-svc-r624j [800.313657ms]
May 21 16:01:17.206: INFO: Created: latency-svc-gmnm2
May 21 16:01:17.248: INFO: Got endpoints: latency-svc-jwwdx [800.571957ms]
May 21 16:01:17.255: INFO: Created: latency-svc-t4jx2
May 21 16:01:17.297: INFO: Got endpoints: latency-svc-j8pgs [798.980367ms]
May 21 16:01:17.304: INFO: Created: latency-svc-fm6s7
May 21 16:01:17.346: INFO: Got endpoints: latency-svc-bhwmn [799.231055ms]
May 21 16:01:17.353: INFO: Created: latency-svc-hvsr9
May 21 16:01:17.447: INFO: Got endpoints: latency-svc-8tbdc [848.992233ms]
May 21 16:01:17.453: INFO: Created: latency-svc-bv9b8
May 21 16:01:17.497: INFO: Got endpoints: latency-svc-z692z [850.672669ms]
May 21 16:01:17.505: INFO: Created: latency-svc-ntpmj
May 21 16:01:17.547: INFO: Got endpoints: latency-svc-fpkpl [850.23629ms]
May 21 16:01:17.555: INFO: Created: latency-svc-mkzxl
May 21 16:01:17.597: INFO: Got endpoints: latency-svc-8hc27 [849.674508ms]
May 21 16:01:17.603: INFO: Created: latency-svc-cwjmg
May 21 16:01:17.647: INFO: Got endpoints: latency-svc-ssv8x [850.059068ms]
May 21 16:01:17.697: INFO: Got endpoints: latency-svc-fqmxm [799.9427ms]
May 21 16:01:17.748: INFO: Got endpoints: latency-svc-qks5r [800.399763ms]
May 21 16:01:17.798: INFO: Got endpoints: latency-svc-psbhl [799.805764ms]
May 21 16:01:17.847: INFO: Got endpoints: latency-svc-jsnhz [799.350818ms]
May 21 16:01:17.897: INFO: Got endpoints: latency-svc-fkm9p [799.837216ms]
May 21 16:01:17.947: INFO: Got endpoints: latency-svc-q59kj [799.559282ms]
May 21 16:01:17.998: INFO: Got endpoints: latency-svc-gmnm2 [799.77965ms]
May 21 16:01:18.047: INFO: Got endpoints: latency-svc-t4jx2 [799.58481ms]
May 21 16:01:18.097: INFO: Got endpoints: latency-svc-fm6s7 [800.387769ms]
May 21 16:01:18.147: INFO: Got endpoints: latency-svc-hvsr9 [800.442305ms]
May 21 16:01:18.198: INFO: Got endpoints: latency-svc-bv9b8 [750.895173ms]
May 21 16:01:18.247: INFO: Got endpoints: latency-svc-ntpmj [749.892985ms]
May 21 16:01:18.297: INFO: Got endpoints: latency-svc-mkzxl [749.028011ms]
May 21 16:01:18.348: INFO: Got endpoints: latency-svc-cwjmg [751.248032ms]
May 21 16:01:18.348: INFO: Latencies: [6.848045ms 9.53634ms 11.971807ms 15.015499ms 17.026451ms 18.866955ms 21.160007ms 23.179066ms 25.508217ms 28.041046ms 31.492826ms 32.975657ms 33.090205ms 33.561809ms 33.724656ms 34.096026ms 34.191004ms 34.527556ms 34.611383ms 34.729131ms 34.857246ms 34.87723ms 34.943338ms 35.100741ms 35.835901ms 35.908007ms 36.827924ms 39.98558ms 40.830738ms 43.442161ms 53.346818ms 103.000205ms 150.039556ms 198.484187ms 244.971933ms 293.898601ms 342.070569ms 390.282877ms 437.637913ms 483.198725ms 530.767874ms 578.580685ms 627.156302ms 674.323023ms 723.451322ms 748.27646ms 748.880377ms 748.965061ms 748.995515ms 749.028011ms 749.059464ms 749.159542ms 749.205518ms 749.308812ms 749.318823ms 749.330595ms 749.34261ms 749.354123ms 749.429535ms 749.507288ms 749.534157ms 749.54249ms 749.581439ms 749.605554ms 749.605716ms 749.612732ms 749.637837ms 749.641725ms 749.650987ms 749.656032ms 749.733597ms 749.815199ms 749.822999ms 749.825459ms 749.841089ms 749.864559ms 749.891531ms 749.892985ms 749.901167ms 749.920976ms 749.926586ms 749.940481ms 749.963871ms 749.964141ms 749.975829ms 750.004237ms 750.028744ms 750.042025ms 750.085134ms 750.104691ms 750.130926ms 750.137012ms 750.148514ms 750.202216ms 750.205122ms 750.26299ms 750.267367ms 750.35657ms 750.374801ms 750.436411ms 750.444613ms 750.56211ms 750.599251ms 750.614254ms 750.662856ms 750.725273ms 750.895173ms 750.977726ms 751.204953ms 751.248032ms 751.415583ms 751.45439ms 751.470975ms 753.82156ms 756.519799ms 794.756598ms 796.490811ms 798.980367ms 799.023093ms 799.035446ms 799.161432ms 799.204882ms 799.231055ms 799.30318ms 799.350818ms 799.419302ms 799.440626ms 799.441811ms 799.458852ms 799.541292ms 799.554427ms 799.559282ms 799.583242ms 799.58481ms 799.585162ms 799.601566ms 799.652306ms 799.669032ms 799.685006ms 799.688925ms 799.71214ms 799.714835ms 799.718522ms 799.749605ms 799.775113ms 799.77965ms 799.780307ms 799.78121ms 799.801537ms 799.805764ms 799.807895ms 799.812065ms 799.837216ms 799.848352ms 799.856837ms 799.858255ms 799.88737ms 799.893814ms 799.895464ms 799.91817ms 799.919235ms 799.932374ms 799.9427ms 799.954306ms 799.962598ms 799.967211ms 799.989699ms 800.005935ms 800.029535ms 800.104964ms 800.152206ms 800.159059ms 800.218188ms 800.287177ms 800.313657ms 800.320509ms 800.380665ms 800.387769ms 800.399763ms 800.429015ms 800.437637ms 800.439521ms 800.442305ms 800.498927ms 800.503883ms 800.538837ms 800.571957ms 800.602934ms 800.615908ms 800.658774ms 800.683168ms 800.727502ms 800.957315ms 801.183432ms 801.259343ms 848.992233ms 849.674508ms 850.059068ms 850.23629ms 850.672669ms]
May 21 16:01:18.348: INFO: 50 %ile: 750.444613ms
May 21 16:01:18.349: INFO: 90 %ile: 800.437637ms
May 21 16:01:18.349: INFO: 99 %ile: 850.23629ms
May 21 16:01:18.349: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:01:18.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-2745" for this suite.

• [SLOW TEST:10.073 seconds]
[sig-network] Service endpoints latency
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":17,"skipped":312,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:01:16.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-0bee5169-b556-47c5-8f75-c3e03101fec3
STEP: Creating a pod to test consume configMaps
May 21 16:01:16.901: INFO: Waiting up to 5m0s for pod "pod-configmaps-f12c178b-0886-4dac-b15d-7a868d0d6224" in namespace "configmap-7457" to be "Succeeded or Failed"
May 21 16:01:16.904: INFO: Pod "pod-configmaps-f12c178b-0886-4dac-b15d-7a868d0d6224": Phase="Pending", Reason="", readiness=false. Elapsed: 2.740327ms
May 21 16:01:18.908: INFO: Pod "pod-configmaps-f12c178b-0886-4dac-b15d-7a868d0d6224": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007120964s
STEP: Saw pod success
May 21 16:01:18.909: INFO: Pod "pod-configmaps-f12c178b-0886-4dac-b15d-7a868d0d6224" satisfied condition "Succeeded or Failed"
May 21 16:01:18.911: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-f12c178b-0886-4dac-b15d-7a868d0d6224 container configmap-volume-test:
STEP: delete the pod
May 21 16:01:18.927: INFO: Waiting for pod pod-configmaps-f12c178b-0886-4dac-b15d-7a868d0d6224 to disappear
May 21 16:01:18.929: INFO: Pod pod-configmaps-f12c178b-0886-4dac-b15d-7a868d0d6224 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:01:18.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7457" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":234,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:01:12.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should be able to create a functioning NodePort service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service nodeport-test with type=NodePort in namespace services-9371
STEP: creating replication controller nodeport-test in namespace services-9371
I0521 16:01:12.343647 23 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-9371, replica count: 2
I0521 16:01:15.394192 23 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 21 16:01:15.394: INFO: Creating new exec pod
May 21 16:01:20.408: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-9371 exec execpods2hz8 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
May 21 16:01:20.652: INFO: stderr: "+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n"
May 21 16:01:20.652: INFO: stdout: ""
May 21 16:01:20.653: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-9371 exec execpods2hz8 -- /bin/sh -x -c nc -zv -t -w 2 10.96.142.122 80'
May 21 16:01:20.901: INFO: stderr: "+ nc -zv -t -w 2 10.96.142.122 80\nConnection to 10.96.142.122 80 port [tcp/http] succeeded!\n"
May 21 16:01:20.901: INFO: stdout: ""
May 21 16:01:20.901: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-9371 exec execpods2hz8 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.2 30208'
May 21 16:01:21.144: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.2 30208\nConnection to 172.18.0.2 30208 port [tcp/30208] succeeded!\n"
May 21 16:01:21.144: INFO: stdout: ""
May 21 16:01:21.144: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-9371 exec execpods2hz8 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.4 30208'
May 21 16:01:21.335: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.4 30208\nConnection to 172.18.0.4 30208 port [tcp/30208] succeeded!\n"
May 21 16:01:21.335: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:01:21.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9371" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786

• [SLOW TEST:9.050 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":12,"skipped":227,"failed":0}
SSSSS
------------------------------
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:01:21.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:01:21.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1393" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":232,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:01:18.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-ca0a944a-9dff-4c58-ade1-256844c093c9
STEP: Creating a pod to test consume configMaps
May 21 16:01:19.023: INFO: Waiting up to 5m0s for pod "pod-configmaps-8579a66d-78e7-4886-8fc9-16eaf63f5f52" in namespace "configmap-2567" to be "Succeeded or Failed"
May 21 16:01:19.026: INFO: Pod "pod-configmaps-8579a66d-78e7-4886-8fc9-16eaf63f5f52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.593579ms
May 21 16:01:21.030: INFO: Pod "pod-configmaps-8579a66d-78e7-4886-8fc9-16eaf63f5f52": Phase="Running", Reason="", readiness=true. Elapsed: 2.005969029s
May 21 16:01:23.033: INFO: Pod "pod-configmaps-8579a66d-78e7-4886-8fc9-16eaf63f5f52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009403371s
STEP: Saw pod success
May 21 16:01:23.033: INFO: Pod "pod-configmaps-8579a66d-78e7-4886-8fc9-16eaf63f5f52" satisfied condition "Succeeded or Failed"
May 21 16:01:23.037: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-8579a66d-78e7-4886-8fc9-16eaf63f5f52 container configmap-volume-test:
STEP: delete the pod
May 21 16:01:23.053: INFO: Waiting for pod pod-configmaps-8579a66d-78e7-4886-8fc9-16eaf63f5f52 to disappear
May 21 16:01:23.059: INFO: Pod pod-configmaps-8579a66d-78e7-4886-8fc9-16eaf63f5f52 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:01:23.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2567" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":260,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":424,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:01:16.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a service externalname-service with the type=ExternalName in namespace services-8629
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-8629
I0521 16:01:16.606681 31 runners.go:190] Created replication controller with name: externalname-service, namespace: services-8629, replica count: 2
I0521 16:01:19.657270 31 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 21 16:01:19.657: INFO: Creating new exec pod
May 21 16:01:24.670: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-8629 exec execpodc5mqf -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
May 21 16:01:24.961: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
May 21 16:01:24.961: INFO: stdout: ""
May 21 16:01:24.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-8629 exec execpodc5mqf -- /bin/sh -x -c nc -zv -t -w 2 10.96.209.237 80'
May 21 16:01:25.189: INFO: stderr: "+ nc -zv -t -w 2 10.96.209.237 80\nConnection to 10.96.209.237 80 port [tcp/http] succeeded!\n"
May 21 16:01:25.189: INFO: stdout: ""
May 21 16:01:25.189: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:01:25.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8629" for this suite.
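The type flip exercised above can be sketched as two manifests for the same Service object; the selector, port numbers, and externalName value below are illustrative placeholders, not values taken from the suite:

```yaml
# Initial shape: an ExternalName service is a DNS alias with no cluster IP.
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ExternalName
  externalName: example.com        # placeholder target hostname
---
# Shape after the test changes the type: a ClusterIP service backed by the
# replication controller's pods (selector and ports are illustrative).
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ClusterIP
  selector:
    name: externalname-service
  ports:
  - port: 80
    targetPort: 80
```

Once the type is ClusterIP, the service gets a virtual IP (10.96.209.237 in the log), which is why the second `nc` probe targets an address rather than the DNS name.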
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:8.643 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":21,"skipped":424,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:01:23.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Geting the pod
STEP: Reading file content from the nginx-container
May 21 16:01:25.175: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-2784 PodName:pod-sharedvolume-44b8fec4-cdf5-4506-a7d2-727b5898266c ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 21 16:01:25.175: INFO: >>> kubeConfig: /root/.kube/config
May 21 16:01:25.240: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:01:25.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2784" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":24,"skipped":296,"failed":0}
SS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:01:18.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 21 16:01:19.602: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 21 16:01:21.611: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209679, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209679, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209679, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209679, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 21 16:01:24.621: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
May 21 16:01:25.621: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
May 21 16:01:26.621: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 16:01:26.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2901-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:01:27.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9859" for this suite.
STEP: Destroying namespace "webhook-9859-markers" for this suite.
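A registration of the kind the test creates via the AdmissionRegistration API can be sketched roughly as follows; every name, the path, and the CA bundle here are placeholders, not the objects the suite actually generates:

```yaml
# Hedged sketch of a mutating-webhook registration for a custom resource.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook   # illustrative name
webhooks:
- name: mutate-crd.webhook.example.com
  clientConfig:
    service:
      name: e2e-test-webhook        # the service paired with the webhook pod
      namespace: webhook-9859
      path: /mutating-custom-resource   # illustrative path
    caBundle: Cg==                  # placeholder; real config needs the server's CA
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["e2e-test-webhook-2901-crds"]
  admissionReviewVersions: ["v1"]
  sideEffects: None
```

The "Verifying the service has paired with the endpoint" steps in the log matter because the API server calls the webhook through that service; until an endpoint exists, admission of the custom resource would fail.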
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.311 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":18,"skipped":335,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:59:25.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 16:01:25.887: INFO: Deleting pod "var-expansion-c1a3292b-a902-403d-b150-ceca61e21ce5" in namespace "var-expansion-377"
May 21 16:01:25.890: INFO: Wait up to 5m0s for pod "var-expansion-c1a3292b-a902-403d-b150-ceca61e21ce5" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:01:27.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-377" for this suite.
• [SLOW TEST:122.063 seconds]
[k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":-1,"completed":17,"skipped":208,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:01:25.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-projected-all-test-volume-c49ee3eb-157e-4b17-9216-14c41b5ccd51
STEP: Creating secret with name secret-projected-all-test-volume-8ef3b02b-ee82-47dd-bb06-1a2a40b74481
STEP: Creating a pod to test Check all projections for projected volume plugin
May 21 16:01:25.283: INFO: Waiting up to 5m0s for pod "projected-volume-c5cdc1bb-1e6c-46d1-853a-e162a5d394b3" in namespace "projected-1369" to be "Succeeded or Failed"
May 21 16:01:25.285: INFO: Pod "projected-volume-c5cdc1bb-1e6c-46d1-853a-e162a5d394b3": Phase="Pending", Reason="", readiness=false. Elapsed: 1.805498ms
May 21 16:01:27.288: INFO: Pod "projected-volume-c5cdc1bb-1e6c-46d1-853a-e162a5d394b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004870193s
May 21 16:01:29.291: INFO: Pod "projected-volume-c5cdc1bb-1e6c-46d1-853a-e162a5d394b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008178366s
STEP: Saw pod success
May 21 16:01:29.291: INFO: Pod "projected-volume-c5cdc1bb-1e6c-46d1-853a-e162a5d394b3" satisfied condition "Succeeded or Failed"
May 21 16:01:29.294: INFO: Trying to get logs from node kali-worker2 pod projected-volume-c5cdc1bb-1e6c-46d1-853a-e162a5d394b3 container projected-all-volume-test:
STEP: delete the pod
May 21 16:01:29.305: INFO: Waiting for pod projected-volume-c5cdc1bb-1e6c-46d1-853a-e162a5d394b3 to disappear
May 21 16:01:29.308: INFO: Pod projected-volume-c5cdc1bb-1e6c-46d1-853a-e162a5d394b3 no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:01:29.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1369" for this suite.
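The "all projections" pod logged above combines several volume sources into one projected volume. A minimal sketch, with illustrative object names rather than the suite's generated ones:

```yaml
# One projected volume merging a configMap, a secret, and downward API fields.
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls /all-volume && cat /all-volume/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /all-volume
  volumes:
  - name: podinfo
    projected:
      sources:
      - configMap:
          name: demo-configmap     # placeholder
      - secret:
          name: demo-secret        # placeholder
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```

Because the container exits after reading the files, the pod reaches the "Succeeded" phase that the test waits for.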
•
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":298,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:01:29.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
May 21 16:01:29.357: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:01:33.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5250" for this suite.
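The scenario this test exercises can be sketched with a minimal pod spec; names and images are illustrative. With `restartPolicy: Never`, a failing init container is not retried, the app container never starts, and the pod goes straight to the Failed phase:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-fails-restartnever    # illustrative name
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox:1.29
    command: ["sh", "-c", "exit 1"]   # always fails
  containers:
  - name: app
    image: busybox:1.29
    command: ["sh", "-c", "top"]      # never runs: blocked on init1
```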
•
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":26,"skipped":306,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:01:27.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[BeforeEach] Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1546
[It] should update a single-container pod's image [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: running the image docker.io/library/httpd:2.4.38-alpine
May 21 16:01:27.768: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-509 run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod'
May 21 16:01:27.883: INFO: stderr: ""
May 21 16:01:27.883: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
May 21 16:01:32.933: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-509 get pod e2e-test-httpd-pod -o json'
May 21 16:01:33.051: INFO: stderr: ""
May 21 16:01:33.051: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"k8s.v1.cni.cncf.io/network-status\": \"[{\\n \\\"name\\\": \\\"\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.1.143\\\"\\n ],\\n \\\"mac\\\": \\\"12:78:1e:14:57:69\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"k8s.v1.cni.cncf.io/networks-status\": \"[{\\n \\\"name\\\": \\\"\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.1.143\\\"\\n ],\\n \\\"mac\\\": \\\"12:78:1e:14:57:69\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\"\n },\n \"creationTimestamp\": \"2021-05-21T16:01:27Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2021-05-21T16:01:27Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:annotations\": {\n \".\": {},\n \"f:k8s.v1.cni.cncf.io/network-status\": {},\n \"f:k8s.v1.cni.cncf.io/networks-status\": {}\n }\n }\n },\n \"manager\": \"multus\",\n \"operation\": \"Update\",\n \"time\": \"2021-05-21T16:01:28Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.143\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2021-05-21T16:01:29Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-509\",\n \"resourceVersion\": \"21840\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-509/pods/e2e-test-httpd-pod\",\n \"uid\": \"cc5cecef-5cb5-47f5-81cc-a66b2ff4b7c9\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-sn947\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"kali-worker\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-sn947\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-sn947\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T16:01:27Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T16:01:29Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T16:01:29Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T16:01:27Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://821909c322b57299553840c1abb515bf1ee5e94f9a312df2fb4484a41fc55a0f\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-21T16:01:28Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.2\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.143\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.143\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2021-05-21T16:01:27Z\"\n }\n}\n"
STEP: replace the image in the pod
May 21 16:01:33.052: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-509 replace -f -'
May 21 16:01:33.398: INFO: stderr: ""
May 21 16:01:33.398: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1550
May 21 16:01:33.401: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-509 delete pods e2e-test-httpd-pod'
May 21 16:01:40.209: INFO: stderr: ""
May 21 16:01:40.209: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:01:40.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-509" for this suite.
• [SLOW TEST:12.475 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1543
    should update a single-container pod's image [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":19,"skipped":348,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:01:25.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 16:01:25.242: INFO: The status of Pod test-webserver-ae78aa99-cca6-4acb-aee4-4a4fd8e8f17c is Pending, waiting for it to be Running (with Ready = true)
May 21 16:01:27.246: INFO: The status of Pod test-webserver-ae78aa99-cca6-4acb-aee4-4a4fd8e8f17c is Pending, waiting for it to be Running (with Ready = true)
May 21 16:01:29.246: INFO: The status of Pod test-webserver-ae78aa99-cca6-4acb-aee4-4a4fd8e8f17c is Running (Ready = false)
May 21 16:01:31.245: INFO: The status of Pod test-webserver-ae78aa99-cca6-4acb-aee4-4a4fd8e8f17c is Running (Ready = false)
May 21 16:01:33.245: INFO: The status of Pod test-webserver-ae78aa99-cca6-4acb-aee4-4a4fd8e8f17c is Running (Ready = false)
May 21 16:01:35.246: INFO: The status of Pod test-webserver-ae78aa99-cca6-4acb-aee4-4a4fd8e8f17c is Running (Ready = false)
May 21 16:01:37.247: INFO: The status of Pod test-webserver-ae78aa99-cca6-4acb-aee4-4a4fd8e8f17c is Running (Ready = false)
May 21 16:01:39.246: INFO: The status of Pod test-webserver-ae78aa99-cca6-4acb-aee4-4a4fd8e8f17c is Running (Ready = false)
May 21 16:01:41.246: INFO: The status of Pod test-webserver-ae78aa99-cca6-4acb-aee4-4a4fd8e8f17c is Running (Ready = false)
May 21 16:01:43.246: INFO: The status of Pod test-webserver-ae78aa99-cca6-4acb-aee4-4a4fd8e8f17c is Running (Ready = true)
May 21 16:01:43.249: INFO: Container started at 2021-05-21 16:01:26 +0000 UTC, pod became ready at 2021-05-21 16:01:41 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:01:43.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5792" for this suite.
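The Running-but-not-Ready window in the log (container started at 16:01:26, Ready only at 16:01:41) is produced by a readiness probe with a deliberate initial delay. A minimal sketch of such a pod; the name, image, and exact delay are illustrative, not the suite's values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-delay-demo       # illustrative name
spec:
  containers:
  - name: test-webserver
    image: nginx:1.19              # placeholder web server image
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15      # pod stays Ready=false at least this long
      periodSeconds: 5
    # No liveness probe, so a slow readiness transition never restarts
    # the container, which is what the test asserts.
```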
• [SLOW TEST:18.048 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":426,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:01:33.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 21 16:01:33.429: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9513 /api/v1/namespaces/watch-9513/configmaps/e2e-watch-test-label-changed a8353975-98bb-4a15-9049-e26496fad310 21988 0 2021-05-21 16:01:33 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-05-21 16:01:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 21 16:01:33.429: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9513 /api/v1/namespaces/watch-9513/configmaps/e2e-watch-test-label-changed a8353975-98bb-4a15-9049-e26496fad310 21989 0 2021-05-21 16:01:33 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-05-21 16:01:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 21 16:01:33.429: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9513 /api/v1/namespaces/watch-9513/configmaps/e2e-watch-test-label-changed a8353975-98bb-4a15-9049-e26496fad310 21990 0 2021-05-21 16:01:33 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-05-21 16:01:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 21 16:01:43.453: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9513 /api/v1/namespaces/watch-9513/configmaps/e2e-watch-test-label-changed a8353975-98bb-4a15-9049-e26496fad310 22165 0 2021-05-21 16:01:33 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-05-21 16:01:33 
+0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 21 16:01:43.453: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9513 /api/v1/namespaces/watch-9513/configmaps/e2e-watch-test-label-changed a8353975-98bb-4a15-9049-e26496fad310 22166 0 2021-05-21 16:01:33 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-05-21 16:01:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} May 21 16:01:43.454: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9513 /api/v1/namespaces/watch-9513/configmaps/e2e-watch-test-label-changed a8353975-98bb-4a15-9049-e26496fad310 22167 0 2021-05-21 16:01:33 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-05-21 16:01:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:01:43.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9513" for this suite. 
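The Watchers test above exercises a well-defined API-server behavior: through a label-selector watch, an object that stops matching the selector is reported as DELETED, and one that starts matching again as ADDED, while updates made during the non-matching window are never observed. A minimal sketch of that event-translation rule (a simplified in-memory model, not the real client-go or apiserver code):

```python
def selector_watch_events(updates, selector):
    """Translate raw (labels, action) object updates into the watch events a
    label-selector watcher would see: not-matching -> matching is ADDED,
    matching -> not-matching (or deletion) is DELETED, matching -> matching
    is MODIFIED, and anything while not matching is invisible."""
    events = []
    was_matching = False
    for labels, action in updates:
        matches = action != "delete" and all(
            labels.get(k) == v for k, v in selector.items())
        if matches and not was_matching:
            events.append("ADDED")
        elif matches and was_matching:
            events.append("MODIFIED")
        elif was_matching and not matches:
            events.append("DELETED")
        was_matching = matches
    return events

sel = {"watch-this-configmap": "label-changed-and-restored"}
updates = [
    (dict(sel), "create"),                              # create configmap
    (dict(sel), "update"),                              # mutation 1
    ({"watch-this-configmap": "other"}, "update"),      # label changed away
    ({"watch-this-configmap": "other"}, "update"),      # mutation 2, unobserved
    (dict(sel), "update"),                              # label restored
    (dict(sel), "update"),                              # mutation 3
    (dict(sel), "delete"),                              # delete configmap
]
print(selector_watch_events(updates, sel))
```

Run against the update sequence the test performs, this yields ADDED, MODIFIED, DELETED, then ADDED, MODIFIED, DELETED, which is exactly the six `Got :` events in the log above.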
• [SLOW TEST:10.078 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":27,"skipped":319,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:01:40.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 21 16:01:42.790: INFO: Successfully updated pod "adopt-release-7nv5g" STEP: Checking that the Job readopts the Pod May 21 16:01:42.790: INFO: Waiting up to 15m0s for pod "adopt-release-7nv5g" in namespace "job-3934" to be "adopted" May 21 16:01:42.793: INFO: Pod "adopt-release-7nv5g": Phase="Running", Reason="", readiness=true. Elapsed: 2.548347ms May 21 16:01:44.795: INFO: Pod "adopt-release-7nv5g": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.005357823s May 21 16:01:44.796: INFO: Pod "adopt-release-7nv5g" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 21 16:01:45.305: INFO: Successfully updated pod "adopt-release-7nv5g" STEP: Checking that the Job releases the Pod May 21 16:01:45.305: INFO: Waiting up to 15m0s for pod "adopt-release-7nv5g" in namespace "job-3934" to be "released" May 21 16:01:45.308: INFO: Pod "adopt-release-7nv5g": Phase="Running", Reason="", readiness=true. Elapsed: 3.006307ms May 21 16:01:47.312: INFO: Pod "adopt-release-7nv5g": Phase="Running", Reason="", readiness=true. Elapsed: 2.006996261s May 21 16:01:47.313: INFO: Pod "adopt-release-7nv5g" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:01:47.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3934" for this suite. • [SLOW TEST:7.086 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":20,"skipped":357,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:01:27.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works 
for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 21 16:01:27.956: INFO: >>> kubeConfig: /root/.kube/config May 21 16:01:32.434: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:01:48.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7284" for this suite. • [SLOW TEST:21.023 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":18,"skipped":221,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:01:48.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 16:01:49.066: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-2809 version' May 21 16:01:49.176: INFO: stderr: "" May 21 16:01:49.176: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19\", GitVersion:\"v1.19.11\", GitCommit:\"c6a2f08fc4378c5381dd948d9ad9d1080e3e6b33\", GitTreeState:\"clean\", BuildDate:\"2021-05-12T12:27:07Z\", GoVersion:\"go1.15.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"19\", GitVersion:\"v1.19.11\", GitCommit:\"c6a2f08fc4378c5381dd948d9ad9d1080e3e6b33\", GitTreeState:\"clean\", BuildDate:\"2021-05-18T09:41:02Z\", GoVersion:\"go1.15.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:01:49.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2809" for this suite. 
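The Kubectl version test above asserts that `kubectl version` prints complete `version.Info` structs for both client and server. A small sketch of how that stdout could be checked (the regex and sample string below are illustrative, not the e2e suite's actual Go assertion):

```python
import re

def parse_kubectl_version(stdout):
    """Extract the client and server GitVersion values from the Go-struct
    style output of `kubectl version`."""
    pairs = re.findall(
        r'(Client|Server) Version: version\.Info\{[^}]*?GitVersion:"([^"]+)"',
        stdout)
    return dict(pairs)

# Sample trimmed from the output captured in the log above.
sample = (
    'Client Version: version.Info{Major:"1", Minor:"19", '
    'GitVersion:"v1.19.11", GoVersion:"go1.15.12"}\n'
    'Server Version: version.Info{Major:"1", Minor:"19", '
    'GitVersion:"v1.19.11", GoVersion:"go1.15.12"}\n'
)
print(parse_kubectl_version(sample))
```

Both entries must be present and non-empty for the "is all data printed" check to pass; here both resolve to v1.19.11, matching the log.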
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":19,"skipped":248,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:01:47.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 16:01:47.357: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:01:49.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2913" for this suite. 
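The websocket exec test above dials the API server's pod `exec` subresource. A sketch of how that endpoint URL is assembled (the path shape is the standard `/api/v1/.../pods/{name}/exec` subresource; the pod name and command below are hypothetical, and the real test drives the connection through the Go client's websocket transport):

```python
from urllib.parse import urlencode

def exec_url(api_server, namespace, pod, command, container=None):
    """Build the API-server exec subresource URL used for remote command
    execution; each argv element becomes its own `command` query parameter."""
    params = [("command", c) for c in command]
    if container:
        params.append(("container", container))
    params += [("stdout", "true"), ("stderr", "true")]
    path = f"/api/v1/namespaces/{namespace}/pods/{pod}/exec"
    return f"{api_server}{path}?{urlencode(params)}"

url = exec_url("https://172.30.13.89:46681", "pods-2913",
               "example-pod", ["echo", "hello"], container="main")
print(url)
```

Upgrading this URL to a websocket (wss://) connection streams stdin/stdout/stderr over multiplexed channels, which is what the test verifies end to end.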
• ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":362,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:01:43.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication May 21 16:01:44.796: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 21 16:01:44.811: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 21 16:01:46.821: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209704, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209704, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209704, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63757209704, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 21 16:01:49.832: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:01:49.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3635" for this suite. STEP: Destroying namespace "webhook-3635-markers" for this suite. 
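The mutating-webhook test above creates one ConfigMap while the webhook collection is registered (it gets mutated) and another after the collection is deleted (it does not). A toy model of that admission flow (pure simulation; the real path goes through the API server calling out to the deployed sample-webhook service, and the mutation key below is hypothetical):

```python
def admit(obj, webhooks):
    """Run an object through each registered mutating webhook in order,
    the way the admission chain rewrites an incoming create request."""
    for hook in webhooks:
        obj = hook(obj)
    return obj

def stamp_mutation(cm):
    # Hypothetical webhook behavior: stamp a marker key into .data.
    return {**cm, "data": {**cm.get("data", {}), "mutation": "true"}}

webhooks = [stamp_mutation]
mutated = admit({"name": "should-be-mutated", "data": {}}, webhooks)
webhooks.clear()  # analogous to deleting the collection of webhooks
untouched = admit({"name": "should-not-be-mutated", "data": {}}, webhooks)
print(mutated["data"], untouched["data"])
```

With the hooks cleared, the second object passes through admission unchanged, mirroring the test's final "should not be mutated" step.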
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.491 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":28,"skipped":361,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:01:43.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 21 16:01:44.379: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 21 16:01:47.394: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 21 16:01:50.449: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=webhook-2443 attach --namespace=webhook-2443 to-be-attached-pod -i -c=container1' May 21 16:01:50.594: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:01:50.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2443" for this suite. STEP: Destroying namespace "webhook-2443-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.335 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":23,"skipped":451,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:01:49.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-61d4dcf4-9b19-48bb-850d-00b8a13e7f4a STEP: Creating a pod to test consume configMaps May 21 16:01:49.263: INFO: Waiting up to 5m0s for pod "pod-configmaps-44963101-3950-48a5-abd2-f49c3a8aea43" in namespace "configmap-2201" to be "Succeeded or Failed" May 21 16:01:49.265: INFO: Pod "pod-configmaps-44963101-3950-48a5-abd2-f49c3a8aea43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053024ms May 21 16:01:51.269: INFO: Pod "pod-configmaps-44963101-3950-48a5-abd2-f49c3a8aea43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005809712s STEP: Saw pod success May 21 16:01:51.269: INFO: Pod "pod-configmaps-44963101-3950-48a5-abd2-f49c3a8aea43" satisfied condition "Succeeded or Failed" May 21 16:01:51.272: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-44963101-3950-48a5-abd2-f49c3a8aea43 container configmap-volume-test: STEP: delete the pod May 21 16:01:51.289: INFO: Waiting for pod pod-configmaps-44963101-3950-48a5-abd2-f49c3a8aea43 to disappear May 21 16:01:51.292: INFO: Pod pod-configmaps-44963101-3950-48a5-abd2-f49c3a8aea43 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:01:51.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2201" for this suite. 
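The ConfigMap test above mounts a single ConfigMap into two volumes of the same pod and checks both mounts expose the data. A sketch of that projection (a simplified model of what the kubelet's configmap volume plugin materializes; the key, value, and mount paths below are hypothetical stand-ins for the test's generated names):

```python
def project_configmap(configmap_data, mount_paths):
    """Materialize each ConfigMap key as a file under every mount path,
    so the same data appears once per volume mount in the pod."""
    files = {}
    for mount in mount_paths:
        for key, value in configmap_data.items():
            files[f"{mount}/{key}"] = value
    return files

files = project_configmap(
    {"data-1": "value-1"},
    ["/etc/configmap-volume-1", "/etc/configmap-volume-2"])
print(files)
```

Each volume gets its own copy of every key, so a one-key ConfigMap mounted twice yields two files with identical contents, which is the condition the test container asserts before exiting successfully.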
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":269,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:01:50.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 21 16:01:50.070: INFO: Waiting up to 5m0s for pod "pod-a3a8599c-b72b-4c06-b599-1696f09cc29e" in namespace "emptydir-4583" to be "Succeeded or Failed" May 21 16:01:50.083: INFO: Pod "pod-a3a8599c-b72b-4c06-b599-1696f09cc29e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.274422ms May 21 16:01:52.087: INFO: Pod "pod-a3a8599c-b72b-4c06-b599-1696f09cc29e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.01725029s STEP: Saw pod success May 21 16:01:52.087: INFO: Pod "pod-a3a8599c-b72b-4c06-b599-1696f09cc29e" satisfied condition "Succeeded or Failed" May 21 16:01:52.090: INFO: Trying to get logs from node kali-worker pod pod-a3a8599c-b72b-4c06-b599-1696f09cc29e container test-container: STEP: delete the pod May 21 16:01:52.105: INFO: Waiting for pod pod-a3a8599c-b72b-4c06-b599-1696f09cc29e to disappear May 21 16:01:52.108: INFO: Pod pod-a3a8599c-b72b-4c06-b599-1696f09cc29e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:01:52.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4583" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":367,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:00:53.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5760 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-5760 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5760 May 21 16:00:53.701: INFO: Found 0 stateful pods, waiting for 1 May 21 16:01:03.705: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 21 16:01:03.708: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=statefulset-5760 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 21 16:01:03.967: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 21 16:01:03.967: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 21 16:01:03.967: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 21 16:01:03.971: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 21 16:01:13.975: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 21 16:01:13.975: INFO: Waiting for statefulset status.replicas updated to 0 May 21 16:01:13.989: INFO: POD NODE PHASE GRACE CONDITIONS May 21 16:01:13.989: INFO: ss-0 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:00:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:00:53 +0000 UTC }] May 21 
16:01:13.989: INFO: May 21 16:01:13.989: INFO: StatefulSet ss has not reached scale 3, at 1 May 21 16:01:14.993: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996638973s May 21 16:01:15.998: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.992598238s May 21 16:01:17.003: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.987991781s May 21 16:01:18.007: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.983270158s May 21 16:01:19.011: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.978784251s May 21 16:01:20.016: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.974443068s May 21 16:01:21.020: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.96935797s May 21 16:01:22.024: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.965419582s May 21 16:01:23.028: INFO: Verifying statefulset ss doesn't scale past 3 for another 961.700787ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5760 May 21 16:01:24.032: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=statefulset-5760 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 16:01:24.233: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 21 16:01:24.233: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 21 16:01:24.233: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 21 16:01:24.233: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=statefulset-5760 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 16:01:24.469: INFO: stderr: "+ mv -v /tmp/index.html 
/usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" May 21 16:01:24.469: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 21 16:01:24.469: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 21 16:01:24.470: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=statefulset-5760 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 16:01:24.712: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" May 21 16:01:24.712: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 21 16:01:24.712: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 21 16:01:24.715: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 21 16:01:34.719: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 21 16:01:34.720: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 21 16:01:34.720: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 21 16:01:34.723: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=statefulset-5760 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 21 16:01:34.966: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 21 16:01:34.966: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 21 16:01:34.966: INFO: stdout of mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 21 16:01:34.966: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=statefulset-5760 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 21 16:01:35.210: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 21 16:01:35.210: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 21 16:01:35.210: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 21 16:01:35.210: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=statefulset-5760 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 21 16:01:35.443: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 21 16:01:35.443: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 21 16:01:35.443: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 21 16:01:35.443: INFO: Waiting for statefulset status.replicas updated to 0 May 21 16:01:35.446: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 21 16:01:45.452: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 21 16:01:45.452: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 21 16:01:45.452: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 21 16:01:45.463: INFO: POD NODE PHASE GRACE CONDITIONS May 21 16:01:45.463: INFO: ss-0 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:00:53 
+0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:00:53 +0000 UTC }] May 21 16:01:45.463: INFO: ss-1 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:13 +0000 UTC }] May 21 16:01:45.463: INFO: ss-2 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:14 +0000 UTC }] May 21 16:01:45.463: INFO: May 21 16:01:45.463: INFO: StatefulSet ss has not reached scale 0, at 3 May 21 16:01:46.467: INFO: POD NODE PHASE GRACE CONDITIONS May 21 16:01:46.467: INFO: ss-0 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:00:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 
+0000 UTC 2021-05-21 16:00:53 +0000 UTC }] May 21 16:01:46.467: INFO: ss-1 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:13 +0000 UTC }] May 21 16:01:46.467: INFO: ss-2 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:14 +0000 UTC }] May 21 16:01:46.467: INFO: May 21 16:01:46.467: INFO: StatefulSet ss has not reached scale 0, at 3 May 21 16:01:47.472: INFO: POD NODE PHASE GRACE CONDITIONS May 21 16:01:47.472: INFO: ss-0 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:00:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:00:53 +0000 UTC }] May 21 16:01:47.472: INFO: ss-1 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:13 +0000 UTC }] May 21 16:01:47.472: INFO: ss-2 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:14 +0000 UTC }] May 21 16:01:47.472: INFO: May 21 16:01:47.472: INFO: StatefulSet ss has not reached scale 0, at 3 May 21 16:01:48.475: INFO: POD NODE PHASE GRACE CONDITIONS May 21 16:01:48.475: INFO: ss-0 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:00:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:00:53 +0000 UTC }] May 21 16:01:48.475: INFO: ss-1 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:13 +0000 UTC }] May 21 16:01:48.475: INFO: ss-2 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 
UTC 2021-05-21 16:01:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:14 +0000 UTC }] May 21 16:01:48.475: INFO: May 21 16:01:48.475: INFO: StatefulSet ss has not reached scale 0, at 3 May 21 16:01:49.483: INFO: POD NODE PHASE GRACE CONDITIONS May 21 16:01:49.483: INFO: ss-0 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:00:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:00:53 +0000 UTC }] May 21 16:01:49.483: INFO: ss-1 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:13 +0000 UTC }] May 21 16:01:49.483: INFO: ss-2 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-21 16:01:14 +0000 UTC }] May 21 16:01:49.483: INFO: May 21 16:01:49.483: INFO: StatefulSet ss has not reached scale 0, at 3 May 21 16:01:50.486: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.976435645s May 21 16:01:51.490: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.972966264s May 21 16:01:52.493: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.968956878s May 21 16:01:53.497: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.966361971s May 21 16:01:54.500: INFO: Verifying statefulset ss doesn't scale past 0 for another 962.708259ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-5760 May 21 16:01:55.503: INFO: Scaling statefulset ss to 0 May 21 16:01:55.514: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 21 16:01:55.516: INFO: Deleting all statefulset in ns statefulset-5760 May 21 16:01:55.519: INFO: Scaling statefulset ss to 0 May 21 16:01:55.528: INFO: Waiting for statefulset status.replicas updated to 0 May 21 16:01:55.531: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:01:55.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5760" for this suite. 
• [SLOW TEST:61.884 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":4,"skipped":33,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:00:51.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-5182d5af-35a4-4781-b15b-2b88cfe4f306 STEP: Creating secret with name s-test-opt-upd-f3155f4b-d0fd-4d1f-968f-8374679b17bb STEP: Creating the pod STEP: Deleting secret s-test-opt-del-5182d5af-35a4-4781-b15b-2b88cfe4f306 STEP: Updating secret s-test-opt-upd-f3155f4b-d0fd-4d1f-968f-8374679b17bb STEP: Creating secret with name s-test-opt-create-604a2f71-c738-4932-b8ef-b035fb2a43b5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:01:59.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3983" for this suite. • [SLOW TEST:68.374 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":580,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:01:55.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command May 21 16:01:55.637: INFO: Waiting up to 5m0s for pod "client-containers-414af233-ca1c-4069-8ffa-bddc051f6f0f" in namespace "containers-3768" to be "Succeeded or Failed" May 21 16:01:55.639: INFO: Pod "client-containers-414af233-ca1c-4069-8ffa-bddc051f6f0f": Phase="Pending", Reason="", readiness=false. Elapsed: 1.922842ms May 21 16:01:57.642: INFO: Pod "client-containers-414af233-ca1c-4069-8ffa-bddc051f6f0f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.004875329s May 21 16:01:59.647: INFO: Pod "client-containers-414af233-ca1c-4069-8ffa-bddc051f6f0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009119936s STEP: Saw pod success May 21 16:01:59.647: INFO: Pod "client-containers-414af233-ca1c-4069-8ffa-bddc051f6f0f" satisfied condition "Succeeded or Failed" May 21 16:01:59.649: INFO: Trying to get logs from node kali-worker2 pod client-containers-414af233-ca1c-4069-8ffa-bddc051f6f0f container test-container: STEP: delete the pod May 21 16:01:59.662: INFO: Waiting for pod client-containers-414af233-ca1c-4069-8ffa-bddc051f6f0f to disappear May 21 16:01:59.665: INFO: Pod client-containers-414af233-ca1c-4069-8ffa-bddc051f6f0f no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:01:59.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3768" for this suite. 
• ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":68,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:01:59.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:01:59.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9463" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • ------------------------------ {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":6,"skipped":95,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:01:21.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 21 16:01:21.554: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 21 16:01:39.025: INFO: >>> kubeConfig: /root/.kube/config May 21 16:01:44.988: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:02:01.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7951" for this suite. 
• [SLOW TEST:39.529 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":14,"skipped":303,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:02:01.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 21 16:02:03.179: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:02:03.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4336" for 
this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":348,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:01:59.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 16:02:01.506: INFO: Waiting up to 5m0s for pod "client-envvars-08dae646-7534-40d0-8198-b1e1fc92d665" in namespace "pods-1219" to be "Succeeded or Failed" May 21 16:02:01.509: INFO: Pod "client-envvars-08dae646-7534-40d0-8198-b1e1fc92d665": Phase="Pending", Reason="", readiness=false. Elapsed: 2.492259ms May 21 16:02:03.512: INFO: Pod "client-envvars-08dae646-7534-40d0-8198-b1e1fc92d665": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006065975s STEP: Saw pod success May 21 16:02:03.513: INFO: Pod "client-envvars-08dae646-7534-40d0-8198-b1e1fc92d665" satisfied condition "Succeeded or Failed" May 21 16:02:03.516: INFO: Trying to get logs from node kali-worker2 pod client-envvars-08dae646-7534-40d0-8198-b1e1fc92d665 container env3cont: STEP: delete the pod May 21 16:02:03.530: INFO: Waiting for pod client-envvars-08dae646-7534-40d0-8198-b1e1fc92d665 to disappear May 21 16:02:03.533: INFO: Pod client-envvars-08dae646-7534-40d0-8198-b1e1fc92d665 no longer exists [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:02:03.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1219" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":587,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:01:59.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 21 16:02:02.373: INFO: Successfully updated pod 
"pod-update-activedeadlineseconds-76c174b2-55ce-4466-b15b-b556d2e0dc5f" May 21 16:02:02.373: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-76c174b2-55ce-4466-b15b-b556d2e0dc5f" in namespace "pods-598" to be "terminated due to deadline exceeded" May 21 16:02:02.376: INFO: Pod "pod-update-activedeadlineseconds-76c174b2-55ce-4466-b15b-b556d2e0dc5f": Phase="Running", Reason="", readiness=true. Elapsed: 2.789806ms May 21 16:02:04.380: INFO: Pod "pod-update-activedeadlineseconds-76c174b2-55ce-4466-b15b-b556d2e0dc5f": Phase="Running", Reason="", readiness=true. Elapsed: 2.00658823s May 21 16:02:06.384: INFO: Pod "pod-update-activedeadlineseconds-76c174b2-55ce-4466-b15b-b556d2e0dc5f": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.010935594s May 21 16:02:06.384: INFO: Pod "pod-update-activedeadlineseconds-76c174b2-55ce-4466-b15b-b556d2e0dc5f" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:02:06.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-598" for this suite. 
• [SLOW TEST:6.577 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:02:03.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 21 16:02:03.602: INFO: Waiting up to 5m0s for pod "pod-9ef35c74-6379-4e8a-b651-48fca15d413e" in namespace "emptydir-6209" to be "Succeeded or Failed" May 21 16:02:03.606: INFO: Pod "pod-9ef35c74-6379-4e8a-b651-48fca15d413e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.720502ms May 21 16:02:05.609: INFO: Pod "pod-9ef35c74-6379-4e8a-b651-48fca15d413e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007240964s May 21 16:02:07.613: INFO: Pod "pod-9ef35c74-6379-4e8a-b651-48fca15d413e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010827165s STEP: Saw pod success May 21 16:02:07.613: INFO: Pod "pod-9ef35c74-6379-4e8a-b651-48fca15d413e" satisfied condition "Succeeded or Failed" May 21 16:02:07.616: INFO: Trying to get logs from node kali-worker2 pod pod-9ef35c74-6379-4e8a-b651-48fca15d413e container test-container: STEP: delete the pod May 21 16:02:07.631: INFO: Waiting for pod pod-9ef35c74-6379-4e8a-b651-48fca15d413e to disappear May 21 16:02:07.633: INFO: Pod pod-9ef35c74-6379-4e8a-b651-48fca15d413e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:02:07.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6209" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":597,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":108,"failed":0} [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:02:06.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 21 16:02:06.432: INFO: Waiting up to 5m0s for pod 
"downward-api-b3898068-0f1d-4b49-a72f-58f98186948b" in namespace "downward-api-9651" to be "Succeeded or Failed" May 21 16:02:06.435: INFO: Pod "downward-api-b3898068-0f1d-4b49-a72f-58f98186948b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.656811ms May 21 16:02:08.439: INFO: Pod "downward-api-b3898068-0f1d-4b49-a72f-58f98186948b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006718833s STEP: Saw pod success May 21 16:02:08.439: INFO: Pod "downward-api-b3898068-0f1d-4b49-a72f-58f98186948b" satisfied condition "Succeeded or Failed" May 21 16:02:08.442: INFO: Trying to get logs from node kali-worker pod downward-api-b3898068-0f1d-4b49-a72f-58f98186948b container dapi-container: STEP: delete the pod May 21 16:02:08.459: INFO: Waiting for pod downward-api-b3898068-0f1d-4b49-a72f-58f98186948b to disappear May 21 16:02:08.461: INFO: Pod downward-api-b3898068-0f1d-4b49-a72f-58f98186948b no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:02:08.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9651" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":108,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:02:08.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 21 16:02:08.573: INFO: Waiting up to 5m0s for pod "pod-e344dc0b-d311-4d1b-9dbe-bbbd6e803af7" in namespace "emptydir-7543" to be "Succeeded or Failed" May 21 16:02:08.576: INFO: Pod "pod-e344dc0b-d311-4d1b-9dbe-bbbd6e803af7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.380997ms May 21 16:02:10.583: INFO: Pod "pod-e344dc0b-d311-4d1b-9dbe-bbbd6e803af7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.009391725s STEP: Saw pod success May 21 16:02:10.583: INFO: Pod "pod-e344dc0b-d311-4d1b-9dbe-bbbd6e803af7" satisfied condition "Succeeded or Failed" May 21 16:02:10.587: INFO: Trying to get logs from node kali-worker pod pod-e344dc0b-d311-4d1b-9dbe-bbbd6e803af7 container test-container: STEP: delete the pod May 21 16:02:10.600: INFO: Waiting for pod pod-e344dc0b-d311-4d1b-9dbe-bbbd6e803af7 to disappear May 21 16:02:10.603: INFO: Pod pod-e344dc0b-d311-4d1b-9dbe-bbbd6e803af7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:02:10.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7543" for this suite. • ------------------------------ [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:01:49.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 21 16:01:57.578: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 21 16:01:57.581: INFO: Pod pod-with-prestop-exec-hook still exists May 21 16:01:59.581: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 21 16:01:59.585: INFO: Pod pod-with-prestop-exec-hook still exists May 21 16:02:01.581: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 21 16:02:01.585: INFO: Pod pod-with-prestop-exec-hook still exists May 21 16:02:03.581: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 21 16:02:03.586: INFO: Pod pod-with-prestop-exec-hook still exists May 21 16:02:05.582: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 21 16:02:05.587: INFO: Pod pod-with-prestop-exec-hook still exists May 21 16:02:07.581: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 21 16:02:07.587: INFO: Pod pod-with-prestop-exec-hook still exists May 21 16:02:09.581: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 21 16:02:09.586: INFO: Pod pod-with-prestop-exec-hook still exists May 21 16:02:11.581: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 21 16:02:11.586: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:02:11.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2774" for this suite. 
• [SLOW TEST:22.086 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":371,"failed":0} S ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 15:58:13.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-b4183a18-a0aa-4065-bd2a-9d92df49a837 in namespace container-probe-5050 May 21 15:58:15.613: INFO: Started pod test-webserver-b4183a18-a0aa-4065-bd2a-9d92df49a837 in namespace container-probe-5050 STEP: checking the pod's current state and verifying that restartCount is present May 21 15:58:15.615: INFO: Initial restart count of pod test-webserver-b4183a18-a0aa-4065-bd2a-9d92df49a837 is 0 STEP: deleting the pod 
[AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:02:16.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5050" for this suite. • [SLOW TEST:242.474 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":88,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:02:16.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server May 21 16:02:16.134: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6341 proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:02:16.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6341" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":4,"skipped":115,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":145,"failed":0} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:02:10.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-6718 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6718 to expose endpoints map[] May 21 16:02:10.652: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found May 21 16:02:11.661: INFO: successfully validated that service endpoint-test2 in namespace services-6718 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-6718 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6718 to expose endpoints map[pod1:[80]] May 21 16:02:15.681: INFO: successfully validated that service 
endpoint-test2 in namespace services-6718 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-6718 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6718 to expose endpoints map[pod1:[80] pod2:[80]] May 21 16:02:17.701: INFO: successfully validated that service endpoint-test2 in namespace services-6718 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-6718 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6718 to expose endpoints map[pod2:[80]] May 21 16:02:17.720: INFO: successfully validated that service endpoint-test2 in namespace services-6718 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-6718 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6718 to expose endpoints map[] May 21 16:02:17.733: INFO: successfully validated that service endpoint-test2 in namespace services-6718 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:02:17.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6718" for this suite. 
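The endpoint bookkeeping validated here (endpoints map[] → map[pod1:[80]] → map[pod1:[80] pod2:[80]] → back to map[]) follows from the Service's label selector; a rough sketch of the service/pod pairing, with the selector label assumed since the test's spec is not shown in the log:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2        # service name from the test
  namespace: services-6718
spec:
  selector:
    app: endpoint-test2       # assumed label; pods carrying it (pod1, pod2) back the service
  ports:
  - port: 80
    targetPort: 80
```

Each pod matching the selector that becomes Ready is added to the Endpoints object on port 80 and removed again when deleted, which is exactly the sequence the log records.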
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:7.138 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":10,"skipped":145,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:02:07.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1385 STEP: creating an pod May 21 16:02:07.785: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8120 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.20 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' May 21 16:02:07.935: INFO: stderr: "" May 21 16:02:07.935: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator 
to start. May 21 16:02:07.935: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 21 16:02:07.936: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8120" to be "running and ready, or succeeded" May 21 16:02:07.939: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 3.248918ms May 21 16:02:09.942: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.006692786s May 21 16:02:09.942: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 21 16:02:09.942: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings May 21 16:02:09.942: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8120 logs logs-generator logs-generator' May 21 16:02:10.081: INFO: stderr: "" May 21 16:02:10.081: INFO: stdout: "I0521 16:02:08.817589 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/xwt 516\nI0521 16:02:09.017793 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/q9z 409\nI0521 16:02:09.217902 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/t5vf 580\nI0521 16:02:09.417669 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/zsz 354\nI0521 16:02:09.617851 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/q6mv 324\nI0521 16:02:09.817862 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/dgl 235\nI0521 16:02:10.017824 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/k5gv 375\n" May 21 16:02:12.081: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8120 logs logs-generator logs-generator' May 21 16:02:12.230: INFO: stderr: "" May 21 16:02:12.230: INFO: stdout: "I0521 16:02:08.817589 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/xwt 
516\nI0521 16:02:09.017793 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/q9z 409\nI0521 16:02:09.217902 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/t5vf 580\nI0521 16:02:09.417669 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/zsz 354\nI0521 16:02:09.617851 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/q6mv 324\nI0521 16:02:09.817862 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/dgl 235\nI0521 16:02:10.017824 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/k5gv 375\nI0521 16:02:10.217827 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/fqg 488\nI0521 16:02:10.417774 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/zv2 289\nI0521 16:02:10.617782 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/dv7 335\nI0521 16:02:10.817773 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/z4pf 251\nI0521 16:02:11.017898 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/mz8 243\nI0521 16:02:11.217799 1 logs_generator.go:76] 12 POST /api/v1/namespaces/ns/pods/mrh 421\nI0521 16:02:11.417758 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/z57 281\nI0521 16:02:11.617779 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/65q 414\nI0521 16:02:11.817776 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/w9d6 409\nI0521 16:02:12.017731 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/6wz 518\nI0521 16:02:12.217785 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/czk 424\n" STEP: limiting log lines May 21 16:02:12.230: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8120 logs logs-generator logs-generator --tail=1' May 21 16:02:12.372: INFO: stderr: "" May 21 16:02:12.372: INFO: stdout: "I0521 16:02:12.217785 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/czk 424\n" May 21 16:02:12.372: INFO: 
got output "I0521 16:02:12.217785 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/czk 424\n" STEP: limiting log bytes May 21 16:02:12.373: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8120 logs logs-generator logs-generator --limit-bytes=1' May 21 16:02:12.514: INFO: stderr: "" May 21 16:02:12.514: INFO: stdout: "I" May 21 16:02:12.514: INFO: got output "I" STEP: exposing timestamps May 21 16:02:12.514: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8120 logs logs-generator logs-generator --tail=1 --timestamps' May 21 16:02:12.654: INFO: stderr: "" May 21 16:02:12.654: INFO: stdout: "2021-05-21T16:02:12.618142613Z I0521 16:02:12.617756 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/cm4v 374\n" May 21 16:02:12.655: INFO: got output "2021-05-21T16:02:12.618142613Z I0521 16:02:12.617756 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/cm4v 374\n" STEP: restricting to a time range May 21 16:02:15.155: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8120 logs logs-generator logs-generator --since=1s' May 21 16:02:15.295: INFO: stderr: "" May 21 16:02:15.296: INFO: stdout: "I0521 16:02:14.417871 1 logs_generator.go:76] 28 GET /api/v1/namespaces/ns/pods/926b 433\nI0521 16:02:14.617761 1 logs_generator.go:76] 29 GET /api/v1/namespaces/kube-system/pods/glg 365\nI0521 16:02:14.817882 1 logs_generator.go:76] 30 POST /api/v1/namespaces/ns/pods/n4vf 517\nI0521 16:02:15.017783 1 logs_generator.go:76] 31 PUT /api/v1/namespaces/kube-system/pods/sg8h 228\nI0521 16:02:15.217782 1 logs_generator.go:76] 32 POST /api/v1/namespaces/kube-system/pods/8n4 455\n" May 21 16:02:15.296: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8120 logs 
logs-generator logs-generator --since=24h' May 21 16:02:15.439: INFO: stderr: "" May 21 16:02:15.439: INFO: stdout: "I0521 16:02:08.817589 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/xwt 516\nI0521 16:02:09.017793 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/q9z 409\nI0521 16:02:09.217902 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/t5vf 580\nI0521 16:02:09.417669 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/zsz 354\nI0521 16:02:09.617851 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/q6mv 324\nI0521 16:02:09.817862 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/dgl 235\nI0521 16:02:10.017824 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/k5gv 375\nI0521 16:02:10.217827 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/fqg 488\nI0521 16:02:10.417774 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/zv2 289\nI0521 16:02:10.617782 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/dv7 335\nI0521 16:02:10.817773 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/z4pf 251\nI0521 16:02:11.017898 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/mz8 243\nI0521 16:02:11.217799 1 logs_generator.go:76] 12 POST /api/v1/namespaces/ns/pods/mrh 421\nI0521 16:02:11.417758 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/z57 281\nI0521 16:02:11.617779 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/65q 414\nI0521 16:02:11.817776 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/w9d6 409\nI0521 16:02:12.017731 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/6wz 518\nI0521 16:02:12.217785 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/czk 424\nI0521 16:02:12.417757 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/2wj4 359\nI0521 16:02:12.617756 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/cm4v 374\nI0521 16:02:12.817893 1 
logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/sd2 329\nI0521 16:02:13.017771 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/kube-system/pods/4l7c 587\nI0521 16:02:13.217870 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/672p 393\nI0521 16:02:13.417719 1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/b78 476\nI0521 16:02:13.617862 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/default/pods/bmq 243\nI0521 16:02:13.817878 1 logs_generator.go:76] 25 POST /api/v1/namespaces/kube-system/pods/9l5r 397\nI0521 16:02:14.017883 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/default/pods/gw8 373\nI0521 16:02:14.217873 1 logs_generator.go:76] 27 PUT /api/v1/namespaces/default/pods/8wr 422\nI0521 16:02:14.417871 1 logs_generator.go:76] 28 GET /api/v1/namespaces/ns/pods/926b 433\nI0521 16:02:14.617761 1 logs_generator.go:76] 29 GET /api/v1/namespaces/kube-system/pods/glg 365\nI0521 16:02:14.817882 1 logs_generator.go:76] 30 POST /api/v1/namespaces/ns/pods/n4vf 517\nI0521 16:02:15.017783 1 logs_generator.go:76] 31 PUT /api/v1/namespaces/kube-system/pods/sg8h 228\nI0521 16:02:15.217782 1 logs_generator.go:76] 32 POST /api/v1/namespaces/kube-system/pods/8n4 455\nI0521 16:02:15.417881 1 logs_generator.go:76] 33 POST /api/v1/namespaces/ns/pods/fbv 571\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1390 May 21 16:02:15.439: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8120 delete pod logs-generator' May 21 16:02:20.419: INFO: stderr: "" May 21 16:02:20.419: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:02:20.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"kubectl-8120" for this suite. • [SLOW TEST:12.676 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1382 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":22,"skipped":661,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:02:20.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 21 16:02:21.250: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 21 16:02:24.268: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 16:02:24.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:02:25.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7761" for this suite. STEP: Destroying namespace "webhook-7761-markers" for this suite. 
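The registration step above ("Registering the custom resource webhook via the AdmissionRegistration API") corresponds to a ValidatingWebhookConfiguration pointed at the sample webhook service deployed earlier in the test; a sketch follows, with the namespace and service name taken from this log but the rule details, group, resource, and path assumed for illustration:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-customresource-webhook      # illustrative name
webhooks:
- name: deny-customresource.example.com  # hypothetical webhook name
  rules:
  - apiGroups: ["example.com"]           # assumed CRD group
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE", "DELETE"]
    resources: ["testcrds"]              # assumed resource name
  clientConfig:
    service:
      namespace: webhook-7761            # namespace from the log
      name: e2e-test-webhook             # service name from the log
      path: /custom-resource             # assumed path
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
```

With failurePolicy: Fail, any create, update, or delete of the matched resource that the webhook rejects is denied at admission time, which is what the "should be denied" steps verify before the offending data is removed and deletion succeeds.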
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.015 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":23,"skipped":669,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:02:16.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:02:27.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1757" for this suite. • [SLOW TEST:11.072 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":-1,"completed":5,"skipped":137,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:00:59.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5070 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-5070 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5070 May 21 16:00:59.691: INFO: Found 0 stateful pods, waiting for 1 May 21 16:01:09.694: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 21 16:01:09.698: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=statefulset-5070 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 21 16:01:09.927: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 21 16:01:09.927: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> 
'/tmp/index.html'\n" May 21 16:01:09.927: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 21 16:01:09.930: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 21 16:01:19.935: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 21 16:01:19.935: INFO: Waiting for statefulset status.replicas updated to 0 May 21 16:01:19.951: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999492s May 21 16:01:20.956: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996534993s May 21 16:01:21.959: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.992110475s May 21 16:01:22.964: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.988481184s May 21 16:01:23.966: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.984281188s May 21 16:01:24.970: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.981378007s May 21 16:01:25.973: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.977884674s May 21 16:01:26.977: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.974835864s May 21 16:01:27.980: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.970965423s May 21 16:01:28.983: INFO: Verifying statefulset ss doesn't scale past 1 for another 967.773918ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5070 May 21 16:01:29.987: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=statefulset-5070 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 16:01:30.242: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 21 16:01:30.242: INFO: stdout: "'/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html'\n" May 21 16:01:30.242: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 21 16:01:30.245: INFO: Found 1 stateful pods, waiting for 3 May 21 16:01:40.249: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 21 16:01:40.249: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 21 16:01:40.249: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 21 16:01:40.256: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=statefulset-5070 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 21 16:01:40.507: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 21 16:01:40.507: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 21 16:01:40.507: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 21 16:01:40.507: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=statefulset-5070 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 21 16:01:40.764: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 21 16:01:40.764: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 21 16:01:40.764: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 21 16:01:40.764: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 
--kubeconfig=/root/.kube/config --namespace=statefulset-5070 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 21 16:01:41.014: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 21 16:01:41.014: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 21 16:01:41.014: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 21 16:01:41.014: INFO: Waiting for statefulset status.replicas updated to 0 May 21 16:01:41.017: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 21 16:01:51.023: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 21 16:01:51.024: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 21 16:01:51.024: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 21 16:01:51.034: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999587s May 21 16:01:52.038: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996384907s May 21 16:01:53.043: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991467335s May 21 16:01:54.047: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.987127474s May 21 16:01:55.051: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.982748554s May 21 16:01:56.055: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.979162213s May 21 16:01:57.060: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.974787964s May 21 16:01:58.064: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.970295622s May 21 16:01:59.069: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.96574983s May 21 16:02:00.073: INFO: Verifying statefulset ss doesn't scale past 3 for another 961.312892ms STEP: 
Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-5070 May 21 16:02:01.078: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=statefulset-5070 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 16:02:01.316: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 21 16:02:01.316: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 21 16:02:01.316: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 21 16:02:01.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=statefulset-5070 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 16:02:01.554: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 21 16:02:01.555: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 21 16:02:01.555: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 21 16:02:01.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=statefulset-5070 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 16:02:01.794: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 21 16:02:01.794: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 21 16:02:01.794: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 21 16:02:01.794: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down 
in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 21 16:02:31.809: INFO: Deleting all statefulset in ns statefulset-5070 May 21 16:02:31.812: INFO: Scaling statefulset ss to 0 May 21 16:02:31.822: INFO: Waiting for statefulset status.replicas updated to 0 May 21 16:02:31.825: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:02:31.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5070" for this suite. • [SLOW TEST:92.196 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":24,"skipped":436,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:02:31.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned 
in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 21 16:02:31.897: INFO: Waiting up to 5m0s for pod "downward-api-5ec7cb3f-7706-44ee-909e-661b5f642eda" in namespace "downward-api-2516" to be "Succeeded or Failed" May 21 16:02:31.900: INFO: Pod "downward-api-5ec7cb3f-7706-44ee-909e-661b5f642eda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.909684ms May 21 16:02:33.904: INFO: Pod "downward-api-5ec7cb3f-7706-44ee-909e-661b5f642eda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007157727s STEP: Saw pod success May 21 16:02:33.904: INFO: Pod "downward-api-5ec7cb3f-7706-44ee-909e-661b5f642eda" satisfied condition "Succeeded or Failed" May 21 16:02:33.907: INFO: Trying to get logs from node kali-worker pod downward-api-5ec7cb3f-7706-44ee-909e-661b5f642eda container dapi-container: STEP: delete the pod May 21 16:02:33.921: INFO: Waiting for pod downward-api-5ec7cb3f-7706-44ee-909e-661b5f642eda to disappear May 21 16:02:33.923: INFO: Pod downward-api-5ec7cb3f-7706-44ee-909e-661b5f642eda no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:02:33.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2516" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":443,"failed":0} SS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:01:50.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-6358 STEP: creating service affinity-nodeport-transition in namespace services-6358 STEP: creating replication controller affinity-nodeport-transition in namespace services-6358 I0521 16:01:50.693863 31 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-6358, replica count: 3 I0521 16:01:53.744448 31 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0521 16:01:56.744822 31 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 21 16:01:56.755: INFO: Creating new exec pod May 21 16:01:59.773: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-6358 exec execpod-affinitytp675 -- /bin/sh -x -c nc -zv -t -w 2 
affinity-nodeport-transition 80' May 21 16:01:59.971: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" May 21 16:01:59.972: INFO: stdout: "" May 21 16:01:59.973: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-6358 exec execpod-affinitytp675 -- /bin/sh -x -c nc -zv -t -w 2 10.96.101.116 80' May 21 16:02:00.178: INFO: stderr: "+ nc -zv -t -w 2 10.96.101.116 80\nConnection to 10.96.101.116 80 port [tcp/http] succeeded!\n" May 21 16:02:00.178: INFO: stdout: "" May 21 16:02:00.178: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-6358 exec execpod-affinitytp675 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.2 32354' May 21 16:02:00.382: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.2 32354\nConnection to 172.18.0.2 32354 port [tcp/32354] succeeded!\n" May 21 16:02:00.383: INFO: stdout: "" May 21 16:02:00.383: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-6358 exec execpod-affinitytp675 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.4 32354' May 21 16:02:00.595: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.4 32354\nConnection to 172.18.0.4 32354 port [tcp/32354] succeeded!\n" May 21 16:02:00.595: INFO: stdout: "" May 21 16:02:00.603: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-6358 exec execpod-affinitytp675 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.2:32354/ ; done' May 21 16:02:00.938: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n" May 21 16:02:00.938: INFO: stdout: "\naffinity-nodeport-transition-mthdd\naffinity-nodeport-transition-mthdd\naffinity-nodeport-transition-nzmfc\naffinity-nodeport-transition-dsjsz\naffinity-nodeport-transition-mthdd\naffinity-nodeport-transition-dsjsz\naffinity-nodeport-transition-dsjsz\naffinity-nodeport-transition-dsjsz\naffinity-nodeport-transition-nzmfc\naffinity-nodeport-transition-nzmfc\naffinity-nodeport-transition-nzmfc\naffinity-nodeport-transition-mthdd\naffinity-nodeport-transition-mthdd\naffinity-nodeport-transition-mthdd\naffinity-nodeport-transition-nzmfc\naffinity-nodeport-transition-mthdd" May 21 16:02:00.938: INFO: Received response from host: affinity-nodeport-transition-mthdd May 21 16:02:00.938: INFO: Received response from host: affinity-nodeport-transition-mthdd May 21 16:02:00.938: INFO: Received response from host: affinity-nodeport-transition-nzmfc May 21 16:02:00.938: INFO: Received response from host: affinity-nodeport-transition-dsjsz May 21 16:02:00.938: INFO: Received response from host: affinity-nodeport-transition-mthdd May 21 16:02:00.938: INFO: Received response from host: 
affinity-nodeport-transition-dsjsz May 21 16:02:00.938: INFO: Received response from host: affinity-nodeport-transition-dsjsz May 21 16:02:00.938: INFO: Received response from host: affinity-nodeport-transition-dsjsz May 21 16:02:00.938: INFO: Received response from host: affinity-nodeport-transition-nzmfc May 21 16:02:00.938: INFO: Received response from host: affinity-nodeport-transition-nzmfc May 21 16:02:00.938: INFO: Received response from host: affinity-nodeport-transition-nzmfc May 21 16:02:00.938: INFO: Received response from host: affinity-nodeport-transition-mthdd May 21 16:02:00.938: INFO: Received response from host: affinity-nodeport-transition-mthdd May 21 16:02:00.938: INFO: Received response from host: affinity-nodeport-transition-mthdd May 21 16:02:00.938: INFO: Received response from host: affinity-nodeport-transition-nzmfc May 21 16:02:00.938: INFO: Received response from host: affinity-nodeport-transition-mthdd May 21 16:02:00.946: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-6358 exec execpod-affinitytp675 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.2:32354/ ; done' May 21 16:02:01.305: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n" May 21 16:02:01.306: INFO: stdout: "\naffinity-nodeport-transition-mthdd\naffinity-nodeport-transition-dsjsz\naffinity-nodeport-transition-nzmfc\naffinity-nodeport-transition-mthdd\naffinity-nodeport-transition-dsjsz\naffinity-nodeport-transition-dsjsz\naffinity-nodeport-transition-dsjsz\naffinity-nodeport-transition-mthdd\naffinity-nodeport-transition-mthdd\naffinity-nodeport-transition-nzmfc\naffinity-nodeport-transition-dsjsz\naffinity-nodeport-transition-dsjsz\naffinity-nodeport-transition-mthdd\naffinity-nodeport-transition-nzmfc\naffinity-nodeport-transition-dsjsz\naffinity-nodeport-transition-mthdd" May 21 16:02:01.306: INFO: Received response from host: affinity-nodeport-transition-mthdd May 21 16:02:01.306: INFO: Received response from host: affinity-nodeport-transition-dsjsz May 21 16:02:01.306: INFO: Received response from host: affinity-nodeport-transition-nzmfc May 21 16:02:01.306: INFO: Received response from host: affinity-nodeport-transition-mthdd May 21 16:02:01.306: INFO: Received response from host: affinity-nodeport-transition-dsjsz May 21 16:02:01.306: INFO: Received response from host: affinity-nodeport-transition-dsjsz May 21 16:02:01.306: INFO: Received response from host: affinity-nodeport-transition-dsjsz May 21 16:02:01.306: INFO: Received response from host: affinity-nodeport-transition-mthdd May 21 16:02:01.306: INFO: Received response from host: affinity-nodeport-transition-mthdd May 21 16:02:01.306: INFO: Received response from host: affinity-nodeport-transition-nzmfc May 21 16:02:01.306: INFO: Received response from host: affinity-nodeport-transition-dsjsz May 21 16:02:01.306: 
INFO: Received response from host: affinity-nodeport-transition-dsjsz May 21 16:02:01.306: INFO: Received response from host: affinity-nodeport-transition-mthdd May 21 16:02:01.306: INFO: Received response from host: affinity-nodeport-transition-nzmfc May 21 16:02:01.306: INFO: Received response from host: affinity-nodeport-transition-dsjsz May 21 16:02:01.306: INFO: Received response from host: affinity-nodeport-transition-mthdd May 21 16:02:31.306: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-6358 exec execpod-affinitytp675 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.2:32354/ ; done' May 21 16:02:31.658: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32354/\n" May 21 16:02:31.658: INFO: stdout: 
"\naffinity-nodeport-transition-dsjsz\naffinity-nodeport-transition-dsjsz\naffinity-nodeport-transition-dsjsz\naffinity-nodeport-transition-dsjsz\naffinity-nodeport-transition-dsjsz\naffinity-nodeport-transition-dsjsz\naffinity-nodeport-transition-dsjsz\naffinity-nodeport-transition-dsjsz\naffinity-nodeport-transition-dsjsz\naffinity-nodeport-transition-dsjsz\naffinity-nodeport-transition-dsjsz\naffinity-nodeport-transition-dsjsz\naffinity-nodeport-transition-dsjsz\naffinity-nodeport-transition-dsjsz\naffinity-nodeport-transition-dsjsz\naffinity-nodeport-transition-dsjsz" May 21 16:02:31.658: INFO: Received response from host: affinity-nodeport-transition-dsjsz May 21 16:02:31.659: INFO: Received response from host: affinity-nodeport-transition-dsjsz May 21 16:02:31.659: INFO: Received response from host: affinity-nodeport-transition-dsjsz May 21 16:02:31.659: INFO: Received response from host: affinity-nodeport-transition-dsjsz May 21 16:02:31.659: INFO: Received response from host: affinity-nodeport-transition-dsjsz May 21 16:02:31.659: INFO: Received response from host: affinity-nodeport-transition-dsjsz May 21 16:02:31.659: INFO: Received response from host: affinity-nodeport-transition-dsjsz May 21 16:02:31.659: INFO: Received response from host: affinity-nodeport-transition-dsjsz May 21 16:02:31.659: INFO: Received response from host: affinity-nodeport-transition-dsjsz May 21 16:02:31.659: INFO: Received response from host: affinity-nodeport-transition-dsjsz May 21 16:02:31.659: INFO: Received response from host: affinity-nodeport-transition-dsjsz May 21 16:02:31.659: INFO: Received response from host: affinity-nodeport-transition-dsjsz May 21 16:02:31.659: INFO: Received response from host: affinity-nodeport-transition-dsjsz May 21 16:02:31.659: INFO: Received response from host: affinity-nodeport-transition-dsjsz May 21 16:02:31.659: INFO: Received response from host: affinity-nodeport-transition-dsjsz May 21 16:02:31.659: INFO: Received response from host: 
affinity-nodeport-transition-dsjsz May 21 16:02:31.659: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-6358, will wait for the garbage collector to delete the pods May 21 16:02:31.725: INFO: Deleting ReplicationController affinity-nodeport-transition took: 5.221436ms May 21 16:02:31.825: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.316333ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:02:40.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6358" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:49.790 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":24,"skipped":463,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:02:40.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom 
resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 16:02:40.491: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:02:41.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5624" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":25,"skipped":472,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:01:51.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-272383d1-6879-4110-b709-3c7acc3875af in namespace container-probe-3537 May 21 16:01:55.356: INFO: Started pod busybox-272383d1-6879-4110-b709-3c7acc3875af in namespace container-probe-3537 STEP: checking the pod's current state and verifying that restartCount is 
present May 21 16:01:55.359: INFO: Initial restart count of pod busybox-272383d1-6879-4110-b709-3c7acc3875af is 0 May 21 16:02:41.448: INFO: Restart count of pod container-probe-3537/busybox-272383d1-6879-4110-b709-3c7acc3875af is now 1 (46.089024945s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:02:41.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3537" for this suite. • [SLOW TEST:50.147 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":277,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:02:41.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-8790 STEP: 
waiting up to 3m0s for service multi-endpoint-test in namespace services-8790 to expose endpoints map[] May 21 16:02:41.089: INFO: Failed to get Endpoints object: endpoints "multi-endpoint-test" not found May 21 16:02:42.098: INFO: successfully validated that service multi-endpoint-test in namespace services-8790 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-8790 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8790 to expose endpoints map[pod1:[100]] May 21 16:02:44.119: INFO: successfully validated that service multi-endpoint-test in namespace services-8790 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-8790 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8790 to expose endpoints map[pod1:[100] pod2:[101]] May 21 16:02:46.139: INFO: successfully validated that service multi-endpoint-test in namespace services-8790 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-8790 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8790 to expose endpoints map[pod2:[101]] May 21 16:02:46.154: INFO: successfully validated that service multi-endpoint-test in namespace services-8790 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-8790 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8790 to expose endpoints map[] May 21 16:02:46.164: INFO: successfully validated that service multi-endpoint-test in namespace services-8790 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:02:46.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8790" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:5.136 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":26,"skipped":483,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:02:46.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy May 21 16:02:46.240: INFO: Asynchronously running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-2714 proxy --unix-socket=/tmp/kubectl-proxy-unix226093742/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:02:46.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2714" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":27,"skipped":502,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:02:46.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:02:46.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6687" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • ------------------------------ {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":28,"skipped":516,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:02:41.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-3891 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-3891 STEP: Deleting pre-stop pod May 21 16:02:50.543: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:02:50.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-3891" for this suite. 
• [SLOW TEST:9.083 seconds]
[k8s.io] [sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should call prestop when killing a pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":22,"skipped":283,"failed":0}
SSS
------------------------------
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:02:33.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 21 16:02:38.012: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 21 16:02:38.015: INFO: Pod pod-with-poststart-exec-hook still exists
May 21 16:02:40.015: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 21 16:02:40.019: INFO: Pod pod-with-poststart-exec-hook still exists
May 21 16:02:42.015: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 21 16:02:42.019: INFO: Pod pod-with-poststart-exec-hook still exists
May 21 16:02:44.015: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 21 16:02:44.020: INFO: Pod pod-with-poststart-exec-hook still exists
May 21 16:02:46.015: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 21 16:02:46.019: INFO: Pod pod-with-poststart-exec-hook still exists
May 21 16:02:48.015: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 21 16:02:48.019: INFO: Pod pod-with-poststart-exec-hook still exists
May 21 16:02:50.015: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 21 16:02:50.020: INFO: Pod pod-with-poststart-exec-hook still exists
May 21 16:02:52.015: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 21 16:02:52.019: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:02:52.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6721" for this suite.
• [SLOW TEST:18.088 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":445,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 15:58:49.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod busybox-423ff6dc-5db0-4742-91e3-b16b155debdb in namespace container-probe-8652
May 21 15:58:51.832: INFO: Started pod busybox-423ff6dc-5db0-4742-91e3-b16b155debdb in namespace container-probe-8652
STEP: checking the pod's current state and verifying that restartCount is present
May 21 15:58:51.836: INFO: Initial restart count of pod busybox-423ff6dc-5db0-4742-91e3-b16b155debdb is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:02:52.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8652" for this suite.
• [SLOW TEST:242.550 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":76,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:02:52.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 21 16:02:52.405: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d2fe366-ace3-462f-ac28-c2c1e84b173f" in namespace "projected-5699" to be "Succeeded or Failed"
May 21 16:02:52.407: INFO: Pod "downwardapi-volume-0d2fe366-ace3-462f-ac28-c2c1e84b173f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.565193ms
May 21 16:02:54.410: INFO: Pod "downwardapi-volume-0d2fe366-ace3-462f-ac28-c2c1e84b173f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005870327s
STEP: Saw pod success
May 21 16:02:54.411: INFO: Pod "downwardapi-volume-0d2fe366-ace3-462f-ac28-c2c1e84b173f" satisfied condition "Succeeded or Failed"
May 21 16:02:54.413: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-0d2fe366-ace3-462f-ac28-c2c1e84b173f container client-container:
STEP: delete the pod
May 21 16:02:54.427: INFO: Waiting for pod downwardapi-volume-0d2fe366-ace3-462f-ac28-c2c1e84b173f to disappear
May 21 16:02:54.430: INFO: Pod downwardapi-volume-0d2fe366-ace3-462f-ac28-c2c1e84b173f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:02:54.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5699" for this suite.
•
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:02:52.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
May 21 16:02:52.627: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 21 16:02:55.646: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 16:02:55.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:02:56.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-1341" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":27,"skipped":466,"failed":0}
SS
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":98,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:02:54.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 21 16:02:55.420: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 21 16:02:58.439: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:02:58.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3536" for this suite.
STEP: Destroying namespace "webhook-3536-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":10,"skipped":98,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:02:58.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 21 16:02:58.594: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e36012d9-ac44-4862-ba6a-23f2ed6a469d" in namespace "downward-api-3859" to be "Succeeded or Failed"
May 21 16:02:58.598: INFO: Pod "downwardapi-volume-e36012d9-ac44-4862-ba6a-23f2ed6a469d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064954ms
May 21 16:03:00.603: INFO: Pod "downwardapi-volume-e36012d9-ac44-4862-ba6a-23f2ed6a469d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008515845s
STEP: Saw pod success
May 21 16:03:00.603: INFO: Pod "downwardapi-volume-e36012d9-ac44-4862-ba6a-23f2ed6a469d" satisfied condition "Succeeded or Failed"
May 21 16:03:00.606: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-e36012d9-ac44-4862-ba6a-23f2ed6a469d container client-container:
STEP: delete the pod
May 21 16:03:00.622: INFO: Waiting for pod downwardapi-volume-e36012d9-ac44-4862-ba6a-23f2ed6a469d to disappear
May 21 16:03:00.625: INFO: Pod downwardapi-volume-e36012d9-ac44-4862-ba6a-23f2ed6a469d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:03:00.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3859" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":103,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:02:56.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
May 21 16:02:56.847: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:03:01.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9500" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":28,"skipped":468,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:03:01.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:03:01.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5824" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":29,"skipped":477,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:02:50.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 16:02:50.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 21 16:02:54.590: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4717 --namespace=crd-publish-openapi-4717 create -f -'
May 21 16:02:55.012: INFO: stderr: ""
May 21 16:02:55.012: INFO: stdout: "e2e-test-crd-publish-openapi-8351-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May 21 16:02:55.012: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4717 --namespace=crd-publish-openapi-4717 delete e2e-test-crd-publish-openapi-8351-crds test-cr'
May 21 16:02:55.136: INFO: stderr: ""
May 21 16:02:55.136: INFO: stdout: "e2e-test-crd-publish-openapi-8351-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
May 21 16:02:55.136: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4717 --namespace=crd-publish-openapi-4717 apply -f -'
May 21 16:02:55.397: INFO: stderr: ""
May 21 16:02:55.397: INFO: stdout: "e2e-test-crd-publish-openapi-8351-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May 21 16:02:55.397: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4717 --namespace=crd-publish-openapi-4717 delete e2e-test-crd-publish-openapi-8351-crds test-cr'
May 21 16:02:55.525: INFO: stderr: ""
May 21 16:02:55.525: INFO: stdout: "e2e-test-crd-publish-openapi-8351-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
May 21 16:02:55.525: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4717 explain e2e-test-crd-publish-openapi-8351-crds'
May 21 16:02:55.784: INFO: stderr: ""
May 21 16:02:55.784: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8351-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:03:02.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4717" for this suite.
• [SLOW TEST:11.702 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":23,"skipped":286,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:03:00.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-b2757776-2add-4939-8d83-28e4cda7c179
STEP: Creating a pod to test consume secrets
May 21 16:03:00.697: INFO: Waiting up to 5m0s for pod "pod-secrets-dffe27ed-2ec5-4e95-957d-6100f1a2b178" in namespace "secrets-4248" to be "Succeeded or Failed"
May 21 16:03:00.700: INFO: Pod "pod-secrets-dffe27ed-2ec5-4e95-957d-6100f1a2b178": Phase="Pending", Reason="", readiness=false. Elapsed: 3.045924ms
May 21 16:03:02.705: INFO: Pod "pod-secrets-dffe27ed-2ec5-4e95-957d-6100f1a2b178": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008510896s
STEP: Saw pod success
May 21 16:03:02.705: INFO: Pod "pod-secrets-dffe27ed-2ec5-4e95-957d-6100f1a2b178" satisfied condition "Succeeded or Failed"
May 21 16:03:02.709: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-dffe27ed-2ec5-4e95-957d-6100f1a2b178 container secret-volume-test:
STEP: delete the pod
May 21 16:03:02.728: INFO: Waiting for pod pod-secrets-dffe27ed-2ec5-4e95-957d-6100f1a2b178 to disappear
May 21 16:03:02.731: INFO: Pod pod-secrets-dffe27ed-2ec5-4e95-957d-6100f1a2b178 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:03:02.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4248" for this suite.
•
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:03:01.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 21 16:03:01.856: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cc985628-6364-4bb8-9aaf-2598bd059b08" in namespace "downward-api-8237" to be "Succeeded or Failed"
May 21 16:03:01.858: INFO: Pod "downwardapi-volume-cc985628-6364-4bb8-9aaf-2598bd059b08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.638146ms
May 21 16:03:03.862: INFO: Pod "downwardapi-volume-cc985628-6364-4bb8-9aaf-2598bd059b08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006040272s
STEP: Saw pod success
May 21 16:03:03.862: INFO: Pod "downwardapi-volume-cc985628-6364-4bb8-9aaf-2598bd059b08" satisfied condition "Succeeded or Failed"
May 21 16:03:03.865: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-cc985628-6364-4bb8-9aaf-2598bd059b08 container client-container:
STEP: delete the pod
May 21 16:03:03.881: INFO: Waiting for pod downwardapi-volume-cc985628-6364-4bb8-9aaf-2598bd059b08 to disappear
May 21 16:03:03.884: INFO: Pod downwardapi-volume-cc985628-6364-4bb8-9aaf-2598bd059b08 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:03:03.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8237" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":486,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":111,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:03:02.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 21 16:03:02.778: INFO: Waiting up to 5m0s for pod "pod-237d5aab-5887-4c5f-90a3-f0a636b85274" in namespace "emptydir-4075" to be "Succeeded or Failed"
May 21 16:03:02.781: INFO: Pod "pod-237d5aab-5887-4c5f-90a3-f0a636b85274": Phase="Pending", Reason="", readiness=false. Elapsed: 2.590706ms
May 21 16:03:04.784: INFO: Pod "pod-237d5aab-5887-4c5f-90a3-f0a636b85274": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00630284s
STEP: Saw pod success
May 21 16:03:04.784: INFO: Pod "pod-237d5aab-5887-4c5f-90a3-f0a636b85274" satisfied condition "Succeeded or Failed"
May 21 16:03:04.788: INFO: Trying to get logs from node kali-worker2 pod pod-237d5aab-5887-4c5f-90a3-f0a636b85274 container test-container:
STEP: delete the pod
May 21 16:03:04.803: INFO: Waiting for pod pod-237d5aab-5887-4c5f-90a3-f0a636b85274 to disappear
May 21 16:03:04.806: INFO: Pod pod-237d5aab-5887-4c5f-90a3-f0a636b85274 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:03:04.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4075" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":111,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:03:04.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 21 16:03:04.908: INFO: Waiting up to 5m0s for pod "pod-335b78d4-148b-44be-9548-191a6d0d98fb" in namespace "emptydir-7437" to be "Succeeded or Failed"
May 21 16:03:04.911: INFO: Pod "pod-335b78d4-148b-44be-9548-191a6d0d98fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.771647ms
May 21 16:03:06.914: INFO: Pod "pod-335b78d4-148b-44be-9548-191a6d0d98fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005519382s
STEP: Saw pod success
May 21 16:03:06.914: INFO: Pod "pod-335b78d4-148b-44be-9548-191a6d0d98fb" satisfied condition "Succeeded or Failed"
May 21 16:03:06.916: INFO: Trying to get logs from node kali-worker pod pod-335b78d4-148b-44be-9548-191a6d0d98fb container test-container:
STEP: delete the pod
May 21 16:03:06.928: INFO: Waiting for pod pod-335b78d4-148b-44be-9548-191a6d0d98fb to disappear
May 21 16:03:06.931: INFO: Pod pod-335b78d4-148b-44be-9548-191a6d0d98fb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:03:06.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7437" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":137,"failed":0}
SS
------------------------------
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:03:06.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 21 16:03:08.986: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:03:08.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7120" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":139,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:03:02.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 16:03:02.316: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
May 21 16:03:02.322: INFO: Pod name sample-pod: Found 0 pods out of 1
May 21 16:03:07.326: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 21 16:03:07.326: INFO: Creating deployment "test-rolling-update-deployment"
May 21 16:03:07.331: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
May 21 16:03:07.336: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
May 21 16:03:09.344: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
May 21 16:03:09.348: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
May 21 16:03:09.358: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-5163 /apis/apps/v1/namespaces/deployment-5163/deployments/test-rolling-update-deployment eb08dd1d-5a50-48f2-abf0-dddc929b46fa 24932 1 2021-05-21 16:03:07 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2021-05-21 16:03:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-05-21 16:03:08 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0050355b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-05-21 16:03:07 +0000 
UTC,LastTransitionTime:2021-05-21 16:03:07 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" has successfully progressed.,LastUpdateTime:2021-05-21 16:03:08 +0000 UTC,LastTransitionTime:2021-05-21 16:03:07 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 21 16:03:09.362: INFO: New ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9 deployment-5163 /apis/apps/v1/namespaces/deployment-5163/replicasets/test-rolling-update-deployment-c4cb8d6d9 39b3ee3e-9e67-4ccd-af18-4a37e55cde2f 24921 1 2021-05-21 16:03:07 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment eb08dd1d-5a50-48f2-abf0-dddc929b46fa 0xc005035be0 0xc005035be1}] [] [{kube-controller-manager Update apps/v1 2021-05-21 16:03:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"eb08dd1d-5a50-48f2-abf0-dddc929b46fa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: c4cb8d6d9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005035c58 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 21 16:03:09.362: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 21 16:03:09.362: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-5163 /apis/apps/v1/namespaces/deployment-5163/replicasets/test-rolling-update-controller 76e396a8-cce3-454a-a8ba-ac4d464395f7 24931 2 2021-05-21 16:03:02 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment eb08dd1d-5a50-48f2-abf0-dddc929b46fa 0xc005035ad7 0xc005035ad8}] [] [{e2e.test Update apps/v1 2021-05-21 16:03:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-05-21 16:03:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"eb08dd1d-5a50-48f2-abf0-dddc929b46fa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005035b78 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 21 16:03:09.366: INFO: Pod "test-rolling-update-deployment-c4cb8d6d9-clb9z" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9-clb9z test-rolling-update-deployment-c4cb8d6d9- deployment-5163 /api/v1/namespaces/deployment-5163/pods/test-rolling-update-deployment-c4cb8d6d9-clb9z e3733883-35a9-40c1-aa0a-10f8e887eea0 24920 0 2021-05-21 16:03:07 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.183" ], "mac": "76:a0:26:6b:66:07", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.183" ], 
"mac": "76:a0:26:6b:66:07", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet test-rolling-update-deployment-c4cb8d6d9 39b3ee3e-9e67-4ccd-af18-4a37e55cde2f 0xc00506a2d0 0xc00506a2d1}] [] [{kube-controller-manager Update v1 2021-05-21 16:03:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39b3ee3e-9e67-4ccd-af18-4a37e55cde2f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-21 16:03:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-21 16:03:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.183\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tmbxn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tmbxn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tmbxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePoli
cy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:03:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:03:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:03:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:03:07 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.183,StartTime:2021-05-21 16:03:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-21 16:03:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://9ad817704f921dd84e4b6ea3652b9b223f12f796b5af06c8b453c195d6e0559d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.183,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:09.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5163" for this suite. 
• [SLOW TEST:7.082 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":24,"skipped":302,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:01:52.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-8414
STEP: creating service affinity-clusterip-transition in namespace services-8414
STEP: creating replication controller affinity-clusterip-transition in namespace services-8414
I0521 16:01:52.195358 18 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-8414, replica count: 3
I0521 16:01:55.245894 18 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0521 16:01:58.246207 18 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 21 16:01:58.251: INFO: Creating new exec pod
May 21 16:02:01.263: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-8414 exec execpod-affinitybvz8b -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80'
May 21 16:02:01.544: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n"
May 21 16:02:01.544: INFO: stdout: ""
May 21 16:02:01.545: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-8414 exec execpod-affinitybvz8b -- /bin/sh -x -c nc -zv -t -w 2 10.96.64.18 80'
May 21 16:02:01.785: INFO: stderr: "+ nc -zv -t -w 2 10.96.64.18 80\nConnection to 10.96.64.18 80 port [tcp/http] succeeded!\n"
May 21 16:02:01.785: INFO: stdout: ""
May 21 16:02:01.795: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-8414 exec execpod-affinitybvz8b -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.64.18:80/ ; done'
May 21 16:02:02.131: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n"
May 21 16:02:02.131: INFO: stdout: "\naffinity-clusterip-transition-drxnf\naffinity-clusterip-transition-drxnf\naffinity-clusterip-transition-drxnf\naffinity-clusterip-transition-drxnf\naffinity-clusterip-transition-drxnf\naffinity-clusterip-transition-drxnf\naffinity-clusterip-transition-drxnf\naffinity-clusterip-transition-drxnf\naffinity-clusterip-transition-drxnf\naffinity-clusterip-transition-drxnf\naffinity-clusterip-transition-drxnf\naffinity-clusterip-transition-drxnf\naffinity-clusterip-transition-drxnf\naffinity-clusterip-transition-drxnf\naffinity-clusterip-transition-drxnf\naffinity-clusterip-transition-drxnf"
May 21 16:02:02.131: INFO: Received response from host: affinity-clusterip-transition-drxnf
May 21 16:02:02.131: INFO: Received response from host: affinity-clusterip-transition-drxnf
May 21 16:02:02.131: INFO: Received response from host: affinity-clusterip-transition-drxnf
May 21 16:02:02.131: INFO: Received response from host: affinity-clusterip-transition-drxnf
May 21 16:02:02.131: INFO: Received response from host: affinity-clusterip-transition-drxnf
May 21 16:02:02.131: INFO: Received response from host: affinity-clusterip-transition-drxnf
May 21 16:02:02.131: INFO: Received response from host: affinity-clusterip-transition-drxnf
May 21 16:02:02.131: INFO: Received response from host: affinity-clusterip-transition-drxnf
May 21 16:02:02.132: INFO: Received response from host: affinity-clusterip-transition-drxnf
May 21 16:02:02.132: INFO: Received response from host: affinity-clusterip-transition-drxnf
May 21 16:02:02.132: INFO: Received response from host: affinity-clusterip-transition-drxnf
May 21 16:02:02.132: INFO: Received response from host: affinity-clusterip-transition-drxnf
May 21 16:02:02.132: INFO: Received response from host: affinity-clusterip-transition-drxnf
May 21 16:02:02.132: INFO: Received response from host: affinity-clusterip-transition-drxnf
May 21 16:02:02.132: INFO: Received response from host: affinity-clusterip-transition-drxnf
May 21 16:02:02.132: INFO: Received response from host: affinity-clusterip-transition-drxnf
May 21 16:02:32.132: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-8414 exec execpod-affinitybvz8b -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.64.18:80/ ; done'
May 21 16:02:32.510: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n"
May 21 16:02:32.511: INFO: stdout: "\naffinity-clusterip-transition-drxnf\naffinity-clusterip-transition-npkxk\naffinity-clusterip-transition-t5sg6\naffinity-clusterip-transition-drxnf\naffinity-clusterip-transition-t5sg6\naffinity-clusterip-transition-npkxk\naffinity-clusterip-transition-drxnf\naffinity-clusterip-transition-drxnf\naffinity-clusterip-transition-t5sg6\naffinity-clusterip-transition-npkxk\naffinity-clusterip-transition-drxnf\naffinity-clusterip-transition-t5sg6\naffinity-clusterip-transition-drxnf\naffinity-clusterip-transition-t5sg6\naffinity-clusterip-transition-drxnf\naffinity-clusterip-transition-npkxk"
May 21 16:02:32.511: INFO: Received response from host: affinity-clusterip-transition-drxnf
May 21 16:02:32.511: INFO: Received response from host: affinity-clusterip-transition-npkxk
May 21 16:02:32.511: INFO: Received response from host: affinity-clusterip-transition-t5sg6
May 21 16:02:32.511: INFO: Received response from host: affinity-clusterip-transition-drxnf
May 21 16:02:32.511: INFO: Received response from host: affinity-clusterip-transition-t5sg6
May 21 16:02:32.511: INFO: Received response from host: affinity-clusterip-transition-npkxk
May 21 16:02:32.511: INFO: Received response from host: affinity-clusterip-transition-drxnf
May 21 16:02:32.511: INFO: Received response from host: affinity-clusterip-transition-drxnf
May 21 16:02:32.511: INFO: Received response from host: affinity-clusterip-transition-t5sg6
May 21 16:02:32.511: INFO: Received response from host: affinity-clusterip-transition-npkxk
May 21 16:02:32.511: INFO: Received response from host: affinity-clusterip-transition-drxnf
May 21 16:02:32.511: INFO: Received response from host: affinity-clusterip-transition-t5sg6
May 21 16:02:32.511: INFO: Received response from host: affinity-clusterip-transition-drxnf
May 21 16:02:32.511: INFO: Received response from host: affinity-clusterip-transition-t5sg6
May 21 16:02:32.511: INFO: Received response from host: affinity-clusterip-transition-drxnf
May 21 16:02:32.511: INFO: Received response from host: affinity-clusterip-transition-npkxk
May 21 16:02:32.521: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-8414 exec execpod-affinitybvz8b -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.64.18:80/ ; done'
May 21 16:02:32.872: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n"
May 21 16:02:32.873: INFO: stdout: "\naffinity-clusterip-transition-npkxk\naffinity-clusterip-transition-t5sg6\naffinity-clusterip-transition-npkxk\naffinity-clusterip-transition-t5sg6\naffinity-clusterip-transition-npkxk\naffinity-clusterip-transition-drxnf\naffinity-clusterip-transition-t5sg6\naffinity-clusterip-transition-t5sg6\naffinity-clusterip-transition-t5sg6\naffinity-clusterip-transition-t5sg6\naffinity-clusterip-transition-npkxk\naffinity-clusterip-transition-drxnf\naffinity-clusterip-transition-t5sg6\naffinity-clusterip-transition-npkxk\naffinity-clusterip-transition-npkxk\naffinity-clusterip-transition-npkxk"
May 21 16:02:32.873: INFO: Received response from host: affinity-clusterip-transition-npkxk
May 21 16:02:32.873: INFO: Received response from host: affinity-clusterip-transition-t5sg6
May 21 16:02:32.873: INFO: Received response from host: affinity-clusterip-transition-npkxk
May 21 16:02:32.873: INFO: Received response from host: affinity-clusterip-transition-t5sg6
May 21 16:02:32.873: INFO: Received response from host: affinity-clusterip-transition-npkxk
May 21 16:02:32.873: INFO: Received response from host: affinity-clusterip-transition-drxnf
May 21 16:02:32.873: INFO: Received response from host: affinity-clusterip-transition-t5sg6
May 21 16:02:32.873: INFO: Received response from host: affinity-clusterip-transition-t5sg6
May 21 16:02:32.873: INFO: Received response from host: affinity-clusterip-transition-t5sg6
May 21 16:02:32.873: INFO: Received response from host: affinity-clusterip-transition-t5sg6
May 21 16:02:32.873: INFO: Received response from host: affinity-clusterip-transition-npkxk
May 21 16:02:32.873: INFO: Received response from host: affinity-clusterip-transition-drxnf
May 21 16:02:32.873: INFO: Received response from host: affinity-clusterip-transition-t5sg6
May 21 16:02:32.873: INFO: Received response from host: affinity-clusterip-transition-npkxk
May 21 16:02:32.873: INFO: Received response from host: affinity-clusterip-transition-npkxk
May 21 16:02:32.873: INFO: Received response from host: affinity-clusterip-transition-npkxk
May 21 16:03:02.873: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-8414 exec execpod-affinitybvz8b -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.64.18:80/ ; done'
May 21 16:03:03.197: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.64.18:80/\n"
May 21 16:03:03.197: INFO: stdout: "\naffinity-clusterip-transition-t5sg6\naffinity-clusterip-transition-t5sg6\naffinity-clusterip-transition-t5sg6\naffinity-clusterip-transition-t5sg6\naffinity-clusterip-transition-t5sg6\naffinity-clusterip-transition-t5sg6\naffinity-clusterip-transition-t5sg6\naffinity-clusterip-transition-t5sg6\naffinity-clusterip-transition-t5sg6\naffinity-clusterip-transition-t5sg6\naffinity-clusterip-transition-t5sg6\naffinity-clusterip-transition-t5sg6\naffinity-clusterip-transition-t5sg6\naffinity-clusterip-transition-t5sg6\naffinity-clusterip-transition-t5sg6\naffinity-clusterip-transition-t5sg6"
May 21 16:03:03.197: INFO: Received response from host: affinity-clusterip-transition-t5sg6
May 21 16:03:03.197: INFO: Received response from host: affinity-clusterip-transition-t5sg6
May 21 16:03:03.197: INFO: Received response from host: affinity-clusterip-transition-t5sg6
May 21 16:03:03.197: INFO: Received response from host: affinity-clusterip-transition-t5sg6
May 21 16:03:03.197: INFO: Received response from host: affinity-clusterip-transition-t5sg6
May 21 16:03:03.197: INFO: Received response from host: affinity-clusterip-transition-t5sg6
May 21 16:03:03.197: INFO: Received response from host: affinity-clusterip-transition-t5sg6
May 21 16:03:03.197: INFO: Received response from host: affinity-clusterip-transition-t5sg6
May 21 16:03:03.197: INFO: Received response from host: affinity-clusterip-transition-t5sg6
May 21 16:03:03.197: INFO: Received response from host: affinity-clusterip-transition-t5sg6
May 21 16:03:03.197: INFO: Received response from host: affinity-clusterip-transition-t5sg6
May 21 16:03:03.197: INFO: Received response from host: affinity-clusterip-transition-t5sg6
May 21 16:03:03.197: INFO: Received response from host: affinity-clusterip-transition-t5sg6
May 21 16:03:03.197: INFO: Received response from host: affinity-clusterip-transition-t5sg6
May 21 16:03:03.197: INFO: Received response from host: affinity-clusterip-transition-t5sg6
May 21 16:03:03.197:
INFO: Received response from host: affinity-clusterip-transition-t5sg6 May 21 16:03:03.197: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-8414, will wait for the garbage collector to delete the pods May 21 16:03:03.266: INFO: Deleting ReplicationController affinity-clusterip-transition took: 6.516822ms May 21 16:03:03.366: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.330672ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:10.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8414" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:78.135 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":30,"skipped":388,"failed":0} SSSSS ------------------------------ [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:02:27.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer 
[NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 21 16:02:27.441: INFO: PodSpec: initContainers in spec.initContainers May 21 16:03:12.804: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-65a43c8d-1ec1-40b0-8b09-622273e3f41e", GenerateName:"", Namespace:"init-container-5394", SelfLink:"/api/v1/namespaces/init-container-5394/pods/pod-init-65a43c8d-1ec1-40b0-8b09-622273e3f41e", UID:"b141d322-2300-487e-a5f6-5b8eba12371d", ResourceVersion:"25047", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63757209747, loc:(*time.Location)(0x770e980)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"441440624"}, Annotations:map[string]string{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.1.171\"\n ],\n \"mac\": \"6e:50:9e:d5:93:7d\",\n \"default\": true,\n \"dns\": {}\n}]", "k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.1.171\"\n ],\n \"mac\": \"6e:50:9e:d5:93:7d\",\n \"default\": true,\n \"dns\": {}\n}]"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032ec7a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032ec7c0)}, v1.ManagedFieldsEntry{Manager:"multus", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032ec820), FieldsType:"FieldsV1", 
FieldsV1:(*v1.FieldsV1)(0xc0032ec860)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032ec8a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032ec8e0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-7k56x", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00440bdc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, 
VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7k56x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7k56x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7k56x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002359a68), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kali-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003d85500), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002359af0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002359b10)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002359b18), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002359b1c), 
PreemptionPolicy:(*v1.PreemptionPolicy)(0xc000cbf520), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209747, loc:(*time.Location)(0x770e980)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209747, loc:(*time.Location)(0x770e980)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209747, loc:(*time.Location)(0x770e980)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209747, loc:(*time.Location)(0x770e980)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.2", PodIP:"10.244.1.171", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.171"}}, StartTime:(*v1.Time)(0xc0032ec920), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003d855e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003d85650)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://dcbf12828e024066b58e2bb5adc7014a9c6e375a69abc4367c44cc2df148e0e4", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0032ec960), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0032ec940), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc002359b9f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:12.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5394" for this suite. 
• [SLOW TEST:45.399 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":6,"skipped":155,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:09.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 16:03:09.049: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 21 16:03:14.052: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 21 16:03:14.052: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 May 21 16:03:14.066: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment 
deployment-2311 /apis/apps/v1/namespaces/deployment-2311/deployments/test-cleanup-deployment 4534b13e-c0e8-4ca6-a401-8c77dadea28c 25075 1 2021-05-21 16:03:14 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2021-05-21 16:03:14 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0047495f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 21 16:03:14.070: INFO: New ReplicaSet "test-cleanup-deployment-5d446bdd47" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5d446bdd47 deployment-2311 /apis/apps/v1/namespaces/deployment-2311/replicasets/test-cleanup-deployment-5d446bdd47 f421756f-9ef6-477c-91fa-c7ebcb5ef7ef 25078 1 2021-05-21 16:03:14 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 4534b13e-c0e8-4ca6-a401-8c77dadea28c 0xc004749ac7 0xc004749ac8}] [] [{kube-controller-manager Update apps/v1 2021-05-21 16:03:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4534b13e-c0e8-4ca6-a401-8c77dadea28c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5d446bdd47,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004749b58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 21 16:03:14.071: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 21 16:03:14.071: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-2311 /apis/apps/v1/namespaces/deployment-2311/replicasets/test-cleanup-controller f2868d1c-39a6-4bd5-862d-1893c7444cb5 25076 1 2021-05-21 16:03:09 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 4534b13e-c0e8-4ca6-a401-8c77dadea28c 0xc0047499b7 0xc0047499b8}] [] [{e2e.test Update apps/v1 2021-05-21 16:03:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-05-21 16:03:14 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"4534b13e-c0e8-4ca6-a401-8c77dadea28c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] 
[] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004749a58 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 21 16:03:14.074: INFO: Pod "test-cleanup-controller-rxrmd" is available: &Pod{ObjectMeta:{test-cleanup-controller-rxrmd test-cleanup-controller- deployment-2311 /api/v1/namespaces/deployment-2311/pods/test-cleanup-controller-rxrmd f93b5aef-0107-4b95-acff-d33b82c84e96 25009 0 2021-05-21 16:03:09 +0000 UTC map[name:cleanup-pod pod:httpd] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.185" ], "mac": "0e:c2:d7:bb:53:98", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.185" ], "mac": "0e:c2:d7:bb:53:98", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet test-cleanup-controller f2868d1c-39a6-4bd5-862d-1893c7444cb5 0xc003911da7 0xc003911da8}] [] [{kube-controller-manager Update v1 2021-05-21 16:03:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f2868d1c-39a6-4bd5-862d-1893c7444cb5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-21 16:03:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-21 16:03:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.185\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgtnk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgtnk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Con
tainer{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgtnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodConditio
n{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:03:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:03:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:03:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:03:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.185,StartTime:2021-05-21 16:03:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-21 16:03:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c58528cfe1b7438f5065f8fd96d24fb964f3fe8f0294ad42d072f11ea6f7a383,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.185,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 21 16:03:14.075: INFO: Pod "test-cleanup-deployment-5d446bdd47-mdrnf" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-5d446bdd47-mdrnf test-cleanup-deployment-5d446bdd47- deployment-2311 /api/v1/namespaces/deployment-2311/pods/test-cleanup-deployment-5d446bdd47-mdrnf e968b3a9-54a1-4db4-9db1-4444b7f7e296 25081 0 2021-05-21 16:03:14 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-5d446bdd47 f421756f-9ef6-477c-91fa-c7ebcb5ef7ef 0xc003911f67 0xc003911f68}] [] [{kube-controller-manager 
Update v1 2021-05-21 16:03:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f421756f-9ef6-477c-91fa-c7ebcb5ef7ef\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rgtnk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rgtnk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rgtnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivile
geEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:14.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "deployment-2311" for this suite. • [SLOW TEST:5.069 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":16,"skipped":145,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:02:17.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 21 16:02:17.837: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5466 /api/v1/namespaces/watch-5466/configmaps/e2e-watch-test-configmap-a 55f4004b-663f-49e6-bcd2-2c5b12b7bcfb 23489 0 2021-05-21 16:02:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-21 16:02:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 21 
16:02:17.838: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5466 /api/v1/namespaces/watch-5466/configmaps/e2e-watch-test-configmap-a 55f4004b-663f-49e6-bcd2-2c5b12b7bcfb 23489 0 2021-05-21 16:02:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-21 16:02:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 21 16:02:27.845: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5466 /api/v1/namespaces/watch-5466/configmaps/e2e-watch-test-configmap-a 55f4004b-663f-49e6-bcd2-2c5b12b7bcfb 23793 0 2021-05-21 16:02:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-21 16:02:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 21 16:02:27.845: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5466 /api/v1/namespaces/watch-5466/configmaps/e2e-watch-test-configmap-a 55f4004b-663f-49e6-bcd2-2c5b12b7bcfb 23793 0 2021-05-21 16:02:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-21 16:02:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 21 16:02:37.855: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5466 /api/v1/namespaces/watch-5466/configmaps/e2e-watch-test-configmap-a 55f4004b-663f-49e6-bcd2-2c5b12b7bcfb 24033 0 
2021-05-21 16:02:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-21 16:02:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 21 16:02:37.855: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5466 /api/v1/namespaces/watch-5466/configmaps/e2e-watch-test-configmap-a 55f4004b-663f-49e6-bcd2-2c5b12b7bcfb 24033 0 2021-05-21 16:02:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-21 16:02:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 21 16:02:47.861: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5466 /api/v1/namespaces/watch-5466/configmaps/e2e-watch-test-configmap-a 55f4004b-663f-49e6-bcd2-2c5b12b7bcfb 24287 0 2021-05-21 16:02:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-21 16:02:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 21 16:02:47.861: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5466 /api/v1/namespaces/watch-5466/configmaps/e2e-watch-test-configmap-a 55f4004b-663f-49e6-bcd2-2c5b12b7bcfb 24287 0 2021-05-21 16:02:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-21 16:02:27 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 21 16:02:57.869: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5466 /api/v1/namespaces/watch-5466/configmaps/e2e-watch-test-configmap-b d738ec2a-367f-4db5-ae13-288fea972f2d 24549 0 2021-05-21 16:02:57 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-05-21 16:02:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 21 16:02:57.869: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5466 /api/v1/namespaces/watch-5466/configmaps/e2e-watch-test-configmap-b d738ec2a-367f-4db5-ae13-288fea972f2d 24549 0 2021-05-21 16:02:57 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-05-21 16:02:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 21 16:03:07.873: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5466 /api/v1/namespaces/watch-5466/configmaps/e2e-watch-test-configmap-b d738ec2a-367f-4db5-ae13-288fea972f2d 24903 0 2021-05-21 16:02:57 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-05-21 16:02:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 21 16:03:07.874: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5466 
/api/v1/namespaces/watch-5466/configmaps/e2e-watch-test-configmap-b d738ec2a-367f-4db5-ae13-288fea972f2d 24903 0 2021-05-21 16:02:57 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-05-21 16:02:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:17.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5466" for this suite. • [SLOW TEST:60.082 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":11,"skipped":173,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:09.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the 
HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 21 16:03:13.522: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 21 16:03:13.525: INFO: Pod pod-with-poststart-http-hook still exists May 21 16:03:15.525: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 21 16:03:15.528: INFO: Pod pod-with-poststart-http-hook still exists May 21 16:03:17.525: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 21 16:03:17.530: INFO: Pod pod-with-poststart-http-hook still exists May 21 16:03:19.525: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 21 16:03:19.529: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:19.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6417" for this suite. 
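[Editor's note] The postStart HTTP hook test above creates a pod whose container fires an HTTP GET at a separate handler pod on startup. A minimal manifest of the kind being exercised looks roughly like this; the handler host, port, and path are illustrative assumptions, not the exact values the e2e framework generates:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook   # pod name taken from the log above
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # image assumed; log does not show it
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart   # illustrative: endpoint on the hook-handler pod
          port: 8080                  # illustrative port
          host: 10.244.2.1            # illustrative: IP of the separate handler pod
```

The test then verifies the handler received the request ("check poststart hook") before deleting the pod and polling for its disappearance, as the log shows.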
• [SLOW TEST:10.080 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":346,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:18.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-5b1e46cb-b1d5-499f-861a-0daf87d7b3a0 STEP: Creating a pod to test consume configMaps May 21 16:03:18.046: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ba43525e-76db-4f3f-897a-46b1267a518c" in namespace "projected-2381" to be "Succeeded or Failed" May 21 16:03:18.049: INFO: Pod "pod-projected-configmaps-ba43525e-76db-4f3f-897a-46b1267a518c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.769903ms May 21 16:03:20.052: INFO: Pod "pod-projected-configmaps-ba43525e-76db-4f3f-897a-46b1267a518c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006091272s May 21 16:03:22.056: INFO: Pod "pod-projected-configmaps-ba43525e-76db-4f3f-897a-46b1267a518c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010166795s STEP: Saw pod success May 21 16:03:22.056: INFO: Pod "pod-projected-configmaps-ba43525e-76db-4f3f-897a-46b1267a518c" satisfied condition "Succeeded or Failed" May 21 16:03:22.059: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-ba43525e-76db-4f3f-897a-46b1267a518c container projected-configmap-volume-test: STEP: delete the pod May 21 16:03:22.074: INFO: Waiting for pod pod-projected-configmaps-ba43525e-76db-4f3f-897a-46b1267a518c to disappear May 21 16:03:22.077: INFO: Pod pod-projected-configmaps-ba43525e-76db-4f3f-897a-46b1267a518c no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:22.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2381" for this suite. 
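[Editor's note] The "consumable from pods in volume as non-root" test above mounts a projected ConfigMap volume into a pod running with a non-root UID and checks the file contents from the container logs. A sketch of such a pod, with illustrative names and an assumed image (the log does not show either):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative name
spec:
  securityContext:
    runAsUser: 1000                        # non-root UID, per the [It] description
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test  # container name matches the log
    image: docker.io/library/busybox:1.29  # assumed image for illustration
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # illustrative ConfigMap name
```

The pod is expected to reach phase Succeeded ("Succeeded or Failed" condition in the log), after which the test reads the container's logs.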
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":247,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:22.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 16:03:22.548: INFO: Checking APIGroup: apiregistration.k8s.io May 21 16:03:22.549: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 May 21 16:03:22.549: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] May 21 16:03:22.549: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 May 21 16:03:22.549: INFO: Checking APIGroup: extensions May 21 16:03:22.550: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 May 21 16:03:22.550: INFO: Versions found [{extensions/v1beta1 v1beta1}] May 21 16:03:22.550: INFO: extensions/v1beta1 matches extensions/v1beta1 May 21 16:03:22.550: INFO: Checking APIGroup: apps May 21 16:03:22.551: INFO: PreferredVersion.GroupVersion: apps/v1 May 21 16:03:22.552: INFO: Versions found [{apps/v1 v1}] May 21 16:03:22.552: INFO: apps/v1 matches apps/v1 May 21 16:03:22.552: INFO: Checking APIGroup: events.k8s.io May 21 16:03:22.553: INFO: 
PreferredVersion.GroupVersion: events.k8s.io/v1 May 21 16:03:22.553: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] May 21 16:03:22.553: INFO: events.k8s.io/v1 matches events.k8s.io/v1 May 21 16:03:22.553: INFO: Checking APIGroup: authentication.k8s.io May 21 16:03:22.554: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 May 21 16:03:22.554: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] May 21 16:03:22.554: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 May 21 16:03:22.554: INFO: Checking APIGroup: authorization.k8s.io May 21 16:03:22.555: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 May 21 16:03:22.555: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] May 21 16:03:22.555: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 May 21 16:03:22.555: INFO: Checking APIGroup: autoscaling May 21 16:03:22.556: INFO: PreferredVersion.GroupVersion: autoscaling/v1 May 21 16:03:22.556: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] May 21 16:03:22.556: INFO: autoscaling/v1 matches autoscaling/v1 May 21 16:03:22.556: INFO: Checking APIGroup: batch May 21 16:03:22.557: INFO: PreferredVersion.GroupVersion: batch/v1 May 21 16:03:22.557: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] May 21 16:03:22.557: INFO: batch/v1 matches batch/v1 May 21 16:03:22.557: INFO: Checking APIGroup: certificates.k8s.io May 21 16:03:22.558: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 May 21 16:03:22.558: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] May 21 16:03:22.558: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 May 21 16:03:22.558: INFO: Checking APIGroup: networking.k8s.io May 21 16:03:22.560: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 May 21 16:03:22.560: INFO: Versions 
found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] May 21 16:03:22.560: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 May 21 16:03:22.560: INFO: Checking APIGroup: policy May 21 16:03:22.561: INFO: PreferredVersion.GroupVersion: policy/v1beta1 May 21 16:03:22.561: INFO: Versions found [{policy/v1beta1 v1beta1}] May 21 16:03:22.561: INFO: policy/v1beta1 matches policy/v1beta1 May 21 16:03:22.561: INFO: Checking APIGroup: rbac.authorization.k8s.io May 21 16:03:22.562: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 May 21 16:03:22.562: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] May 21 16:03:22.562: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 May 21 16:03:22.562: INFO: Checking APIGroup: storage.k8s.io May 21 16:03:22.563: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 May 21 16:03:22.563: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] May 21 16:03:22.563: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 May 21 16:03:22.564: INFO: Checking APIGroup: admissionregistration.k8s.io May 21 16:03:22.565: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 May 21 16:03:22.565: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] May 21 16:03:22.565: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 May 21 16:03:22.565: INFO: Checking APIGroup: apiextensions.k8s.io May 21 16:03:22.566: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 May 21 16:03:22.566: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] May 21 16:03:22.566: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 May 21 16:03:22.566: INFO: Checking APIGroup: scheduling.k8s.io May 21 16:03:22.567: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 May 21 16:03:22.567: INFO: 
Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] May 21 16:03:22.567: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 May 21 16:03:22.567: INFO: Checking APIGroup: coordination.k8s.io May 21 16:03:22.569: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 May 21 16:03:22.569: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] May 21 16:03:22.569: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 May 21 16:03:22.569: INFO: Checking APIGroup: node.k8s.io May 21 16:03:22.570: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1beta1 May 21 16:03:22.570: INFO: Versions found [{node.k8s.io/v1beta1 v1beta1}] May 21 16:03:22.570: INFO: node.k8s.io/v1beta1 matches node.k8s.io/v1beta1 May 21 16:03:22.570: INFO: Checking APIGroup: discovery.k8s.io May 21 16:03:22.571: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1 May 21 16:03:22.571: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}] May 21 16:03:22.571: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1 May 21 16:03:22.571: INFO: Checking APIGroup: k8s.cni.cncf.io May 21 16:03:22.572: INFO: PreferredVersion.GroupVersion: k8s.cni.cncf.io/v1 May 21 16:03:22.572: INFO: Versions found [{k8s.cni.cncf.io/v1 v1}] May 21 16:03:22.572: INFO: k8s.cni.cncf.io/v1 matches k8s.cni.cncf.io/v1 May 21 16:03:22.572: INFO: Checking APIGroup: projectcontour.io May 21 16:03:22.574: INFO: PreferredVersion.GroupVersion: projectcontour.io/v1 May 21 16:03:22.574: INFO: Versions found [{projectcontour.io/v1 v1} {projectcontour.io/v1alpha1 v1alpha1}] May 21 16:03:22.574: INFO: projectcontour.io/v1 matches projectcontour.io/v1 [AfterEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:22.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-7075" for this suite. 
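[Editor's note] The PreferredVersion checks above compare each group's `preferredVersion.groupVersion` against the entries in its `versions` list, as returned by the discovery endpoint (e.g. `GET /apis/apps`). For the `apps` group the response has roughly this shape:

```yaml
kind: APIGroup
apiVersion: v1
name: apps
versions:
- groupVersion: apps/v1
  version: v1
preferredVersion:
  groupVersion: apps/v1
  version: v1
```

A group passes when its preferred groupVersion appears among the served versions, which is what lines like "apps/v1 matches apps/v1" record.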
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":13,"skipped":264,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:22.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events May 21 16:03:22.681: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:22.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5323" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":14,"skipped":297,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:02:11.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0521 16:02:21.708730 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 21 16:03:23.726: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
May 21 16:03:23.726: INFO: Deleting pod "simpletest-rc-to-be-deleted-9sdzn" in namespace "gc-1534" May 21 16:03:23.732: INFO: Deleting pod "simpletest-rc-to-be-deleted-crhwf" in namespace "gc-1534" May 21 16:03:23.739: INFO: Deleting pod "simpletest-rc-to-be-deleted-ctrst" in namespace "gc-1534" May 21 16:03:23.746: INFO: Deleting pod "simpletest-rc-to-be-deleted-ghk5z" in namespace "gc-1534" May 21 16:03:23.752: INFO: Deleting pod "simpletest-rc-to-be-deleted-jrzvt" in namespace "gc-1534" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:23.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1534" for this suite. • [SLOW TEST:72.160 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:10.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 21 16:03:10.328: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a 
version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:31.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4463" for this suite. • [SLOW TEST:21.467 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":31,"skipped":393,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:02:25.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics W0521 16:02:31.531443 33 metrics_grabber.go:105] Did not receive an external client interface.
Grabbing metrics from ClusterAutoscaler is disabled. May 21 16:03:33.554: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:33.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8415" for this suite. • [SLOW TEST:68.087 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":23,"skipped":372,"failed":0} [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:23.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 16:03:23.792: INFO: Creating ReplicaSet my-hostname-basic-bed22666-2c8d-4361-9993-5030f03bf230 May 21 16:03:23.799: INFO: Pod name my-hostname-basic-bed22666-2c8d-4361-9993-5030f03bf230: Found 0 pods out of 1 May 21 16:03:28.803: INFO: Pod name my-hostname-basic-bed22666-2c8d-4361-9993-5030f03bf230: Found 1 pod out of 1 
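The "Found 0 pods out of 1 ... Found 1 pods out of 1" lines above come from the framework polling the ReplicaSet on a 5-second cadence until the desired replica count is reached. A minimal sketch of that poll-until-ready pattern (the real framework is Go; `get_ready_count` here is a hypothetical stand-in for a call that counts running pods):

```python
import time

def wait_for_replicas(get_ready_count, want, timeout=300.0, interval=5.0):
    """Poll get_ready_count() every `interval` seconds until it reports
    `want` ready pods or `timeout` elapses, mirroring the 5s poll cadence
    visible in the log above."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        found = get_ready_count()  # in the e2e suite this is an API list call
        print(f"Found {found} pods out of {want}")
        if found == want:
            return True
        time.sleep(interval)
    return False

# Usage with a stand-in counter that becomes ready on the second poll:
counts = iter([0, 1])
assert wait_for_replicas(lambda: next(counts), want=1, interval=0.0)
```

The framework gives up and fails the spec when the deadline passes, which is why the timeout (here 5m0s in the log) appears in every "Waiting up to ..." line.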
May 21 16:03:28.803: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-bed22666-2c8d-4361-9993-5030f03bf230" is running May 21 16:03:28.805: INFO: Pod "my-hostname-basic-bed22666-2c8d-4361-9993-5030f03bf230-mlc6x" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-21 16:03:23 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-21 16:03:25 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-21 16:03:25 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-21 16:03:23 +0000 UTC Reason: Message:}]) May 21 16:03:28.806: INFO: Trying to dial the pod May 21 16:03:33.817: INFO: Controller my-hostname-basic-bed22666-2c8d-4361-9993-5030f03bf230: Got expected result from replica 1 [my-hostname-basic-bed22666-2c8d-4361-9993-5030f03bf230-mlc6x]: "my-hostname-basic-bed22666-2c8d-4361-9993-5030f03bf230-mlc6x", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:33.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1160" for this suite. 
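The "serve a basic image on each replica" spec above creates a single-replica ReplicaSet, waits for its pod to run, then dials each replica and checks that it reports its own hostname. A hypothetical manifest equivalent to what the test creates (the image and args are assumptions based on common e2e serve-hostname setups, not taken from this log):

```yaml
# Sketch of the ReplicaSet exercised above; the suffix on the real name
# (my-hostname-basic-...) is generated per run.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: k8s.gcr.io/e2e-test-images/agnhost:2.21  # assumed test image
        args: ["serve-hostname"]                         # replies with pod name
        ports:
        - containerPort: 9376
```

Because the container answers with its pod name, the "Got expected result from replica 1" line below amounts to comparing the HTTP response against the pod's own generated name.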
• [SLOW TEST:10.060 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":24,"skipped":372,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:31.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 21 16:03:31.825: INFO: Waiting up to 5m0s for pod "pod-8908e4bb-5544-44b6-9845-230355b9285a" in namespace "emptydir-3215" to be "Succeeded or Failed" May 21 16:03:31.828: INFO: Pod "pod-8908e4bb-5544-44b6-9845-230355b9285a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.703826ms May 21 16:03:33.831: INFO: Pod "pod-8908e4bb-5544-44b6-9845-230355b9285a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.005979375s STEP: Saw pod success May 21 16:03:33.831: INFO: Pod "pod-8908e4bb-5544-44b6-9845-230355b9285a" satisfied condition "Succeeded or Failed" May 21 16:03:33.834: INFO: Trying to get logs from node kali-worker pod pod-8908e4bb-5544-44b6-9845-230355b9285a container test-container: STEP: delete the pod May 21 16:03:33.848: INFO: Waiting for pod pod-8908e4bb-5544-44b6-9845-230355b9285a to disappear May 21 16:03:33.850: INFO: Pod pod-8908e4bb-5544-44b6-9845-230355b9285a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:33.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3215" for this suite. •S ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":405,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:33.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pods May 21 16:03:33.922: INFO: created test-pod-1 May 21 16:03:33.926: INFO: created test-pod-2 May 21 16:03:33.932: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be 
deleted [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:33.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-814" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":-1,"completed":25,"skipped":403,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":24,"skipped":677,"failed":0} [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:33.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium May 21 16:03:33.595: INFO: Waiting up to 5m0s for pod "pod-3784dfc2-02ec-4d08-a3e6-d4eda188b827" in namespace "emptydir-634" to be "Succeeded or Failed" May 21 16:03:33.597: INFO: Pod "pod-3784dfc2-02ec-4d08-a3e6-d4eda188b827": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168732ms May 21 16:03:35.601: INFO: Pod "pod-3784dfc2-02ec-4d08-a3e6-d4eda188b827": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006048714s STEP: Saw pod success May 21 16:03:35.601: INFO: Pod "pod-3784dfc2-02ec-4d08-a3e6-d4eda188b827" satisfied condition "Succeeded or Failed" May 21 16:03:35.604: INFO: Trying to get logs from node kali-worker pod pod-3784dfc2-02ec-4d08-a3e6-d4eda188b827 container test-container: STEP: delete the pod May 21 16:03:35.617: INFO: Waiting for pod pod-3784dfc2-02ec-4d08-a3e6-d4eda188b827 to disappear May 21 16:03:35.620: INFO: Pod pod-3784dfc2-02ec-4d08-a3e6-d4eda188b827 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:35.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-634" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":677,"failed":0} SS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:03.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4213 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4213;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4213 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4213;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4213.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4213.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4213.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4213.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4213.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4213.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4213.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4213.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4213.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4213.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4213.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4213.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4213.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 5.39.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.39.5_udp@PTR;check="$$(dig +tcp +noall +answer +search 5.39.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.39.5_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4213 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4213;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4213 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4213;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4213.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4213.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4213.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4213.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4213.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4213.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4213.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4213.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4213.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4213.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4213.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4213.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4213.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 5.39.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.39.5_udp@PTR;check="$$(dig +tcp +noall +answer +search 5.39.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.39.5_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 21 16:03:06.036: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:06.039: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:06.043: INFO: Unable to read wheezy_udp@dns-test-service.dns-4213 from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:06.047: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4213 from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:06.051: INFO: Unable to read wheezy_udp@dns-test-service.dns-4213.svc from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 
21 16:03:06.055: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4213.svc from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:06.059: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4213.svc from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:06.063: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4213.svc from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:06.090: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:06.094: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:06.097: INFO: Unable to read jessie_udp@dns-test-service.dns-4213 from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:06.101: INFO: Unable to read jessie_tcp@dns-test-service.dns-4213 from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:06.105: INFO: Unable to read jessie_udp@dns-test-service.dns-4213.svc from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods 
dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:06.108: INFO: Unable to read jessie_tcp@dns-test-service.dns-4213.svc from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:06.112: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4213.svc from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:06.116: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4213.svc from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:06.140: INFO: Lookups using dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4213 wheezy_tcp@dns-test-service.dns-4213 wheezy_udp@dns-test-service.dns-4213.svc wheezy_tcp@dns-test-service.dns-4213.svc wheezy_udp@_http._tcp.dns-test-service.dns-4213.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4213.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4213 jessie_tcp@dns-test-service.dns-4213 jessie_udp@dns-test-service.dns-4213.svc jessie_tcp@dns-test-service.dns-4213.svc jessie_udp@_http._tcp.dns-test-service.dns-4213.svc jessie_tcp@_http._tcp.dns-test-service.dns-4213.svc] May 21 16:03:11.150: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:11.152: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource 
(get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:11.155: INFO: Unable to read wheezy_udp@dns-test-service.dns-4213 from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:11.157: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4213 from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:11.159: INFO: Unable to read wheezy_udp@dns-test-service.dns-4213.svc from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:11.162: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4213.svc from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:11.188: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:11.191: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:11.194: INFO: Unable to read jessie_udp@dns-test-service.dns-4213 from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:11.197: INFO: Unable to read jessie_tcp@dns-test-service.dns-4213 from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods 
dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:11.200: INFO: Unable to read jessie_udp@dns-test-service.dns-4213.svc from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:11.203: INFO: Unable to read jessie_tcp@dns-test-service.dns-4213.svc from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:11.230: INFO: Lookups using dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4213 wheezy_tcp@dns-test-service.dns-4213 wheezy_udp@dns-test-service.dns-4213.svc wheezy_tcp@dns-test-service.dns-4213.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4213 jessie_tcp@dns-test-service.dns-4213 jessie_udp@dns-test-service.dns-4213.svc jessie_tcp@dns-test-service.dns-4213.svc] May 21 16:03:16.145: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:16.148: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:16.152: INFO: Unable to read wheezy_udp@dns-test-service.dns-4213 from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:16.155: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4213 from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the 
requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:16.159: INFO: Unable to read wheezy_udp@dns-test-service.dns-4213.svc from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:16.162: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4213.svc from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:16.193: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:16.197: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:16.200: INFO: Unable to read jessie_udp@dns-test-service.dns-4213 from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:16.204: INFO: Unable to read jessie_tcp@dns-test-service.dns-4213 from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:16.208: INFO: Unable to read jessie_udp@dns-test-service.dns-4213.svc from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:16.211: INFO: Unable to read jessie_tcp@dns-test-service.dns-4213.svc from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the 
requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:16.239: INFO: Lookups using dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4213 wheezy_tcp@dns-test-service.dns-4213 wheezy_udp@dns-test-service.dns-4213.svc wheezy_tcp@dns-test-service.dns-4213.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4213 jessie_tcp@dns-test-service.dns-4213 jessie_udp@dns-test-service.dns-4213.svc jessie_tcp@dns-test-service.dns-4213.svc] May 21 16:03:21.145: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:21.149: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:21.153: INFO: Unable to read wheezy_udp@dns-test-service.dns-4213 from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:21.157: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4213 from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:21.161: INFO: Unable to read wheezy_udp@dns-test-service.dns-4213.svc from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:21.164: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4213.svc from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the 
server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:21.202: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:21.206: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:21.210: INFO: Unable to read jessie_udp@dns-test-service.dns-4213 from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:21.213: INFO: Unable to read jessie_tcp@dns-test-service.dns-4213 from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:21.217: INFO: Unable to read jessie_udp@dns-test-service.dns-4213.svc from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:21.221: INFO: Unable to read jessie_tcp@dns-test-service.dns-4213.svc from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:21.250: INFO: Lookups using dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4213 wheezy_tcp@dns-test-service.dns-4213 wheezy_udp@dns-test-service.dns-4213.svc wheezy_tcp@dns-test-service.dns-4213.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service 
jessie_udp@dns-test-service.dns-4213 jessie_tcp@dns-test-service.dns-4213 jessie_udp@dns-test-service.dns-4213.svc jessie_tcp@dns-test-service.dns-4213.svc] May 21 16:03:26.145: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:26.150: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:26.154: INFO: Unable to read wheezy_udp@dns-test-service.dns-4213 from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:26.159: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4213 from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:26.163: INFO: Unable to read wheezy_udp@dns-test-service.dns-4213.svc from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:26.168: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4213.svc from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:26.203: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:26.207: INFO: Unable to read jessie_tcp@dns-test-service from pod 
dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:26.210: INFO: Unable to read jessie_udp@dns-test-service.dns-4213 from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:26.214: INFO: Unable to read jessie_tcp@dns-test-service.dns-4213 from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:26.218: INFO: Unable to read jessie_udp@dns-test-service.dns-4213.svc from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:26.222: INFO: Unable to read jessie_tcp@dns-test-service.dns-4213.svc from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:26.253: INFO: Lookups using dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4213 wheezy_tcp@dns-test-service.dns-4213 wheezy_udp@dns-test-service.dns-4213.svc wheezy_tcp@dns-test-service.dns-4213.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4213 jessie_tcp@dns-test-service.dns-4213 jessie_udp@dns-test-service.dns-4213.svc jessie_tcp@dns-test-service.dns-4213.svc] May 21 16:03:31.145: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:31.148: INFO: Unable to read 
wheezy_tcp@dns-test-service from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:31.152: INFO: Unable to read wheezy_udp@dns-test-service.dns-4213 from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:31.155: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4213 from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:31.159: INFO: Unable to read wheezy_udp@dns-test-service.dns-4213.svc from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:31.162: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4213.svc from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:31.193: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:31.196: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:31.199: INFO: Unable to read jessie_udp@dns-test-service.dns-4213 from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:31.202: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-4213 from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:31.205: INFO: Unable to read jessie_udp@dns-test-service.dns-4213.svc from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:31.208: INFO: Unable to read jessie_tcp@dns-test-service.dns-4213.svc from pod dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6: the server could not find the requested resource (get pods dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6) May 21 16:03:31.235: INFO: Lookups using dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4213 wheezy_tcp@dns-test-service.dns-4213 wheezy_udp@dns-test-service.dns-4213.svc wheezy_tcp@dns-test-service.dns-4213.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4213 jessie_tcp@dns-test-service.dns-4213 jessie_udp@dns-test-service.dns-4213.svc jessie_tcp@dns-test-service.dns-4213.svc] May 21 16:03:36.235: INFO: DNS probes using dns-4213/dns-test-a0c8ba88-891e-496e-bbf2-e56f32402aa6 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:36.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4213" for this suite. 
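The lookup IDs in the failures above (`wheezy_udp@dns-test-service`, `jessie_tcp@dns-test-service.dns-4213.svc`, …) are result files written by probe pods running two client images. The log does not print this test's probe script, but the pattern matches the dig loops it prints for the /etc/hosts test later in this run; a hedged on-cluster sketch of one name's UDP and TCP checks (the loop bounds and result paths follow that later script; treating them as the same here is an assumption):

```shell
# Hedged sketch: how a probe pod checks one service name over UDP and TCP.
# Runs inside the probe container on-cluster; not runnable outside it.
# Result-file names mirror the lookup IDs in the log above.
for i in $(seq 1 600); do
  check="$(dig +notcp +noall +answer +search dns-test-service A)" \
    && test -n "$check" && echo OK > /results/wheezy_udp@dns-test-service
  check="$(dig +tcp +noall +answer +search dns-test-service A)" \
    && test -n "$check" && echo OK > /results/wheezy_tcp@dns-test-service
  sleep 1
done
```

The "Unable to read … the server could not find the requested resource" retries above are the harness polling for these result files until every expected name has an `OK`, at which point it logs "DNS probes … succeeded".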
• [SLOW TEST:32.294 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":31,"skipped":530,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:19.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 21 16:03:20.476: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 21 16:03:22.486: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209800, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209800, loc:(*time.Location)(0x770e980)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209800, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209800, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 21 16:03:25.499: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:36.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7612" for this suite. STEP: Destroying namespace "webhook-7612-markers" for this suite. 
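The deny test above registers its webhook "via the AdmissionRegistration API" against the `e2e-test-webhook` service it just deployed, and its last two steps show that a namespace can bypass the policy. A minimal sketch of what such a registration can look like; the object name, webhook path, and the label-selector bypass mechanism are illustrative assumptions, not the test's literal object:

```yaml
# Hedged sketch of a ValidatingWebhookConfiguration like the one the test registers.
# Names, path, and the namespaceSelector are illustrative assumptions.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-pods-and-configmaps      # hypothetical name
webhooks:
  - name: deny.example.com            # hypothetical
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods", "configmaps"]
    clientConfig:
      service:
        name: e2e-test-webhook        # service name from the log
        namespace: webhook-7612       # namespace from the log
        path: /always-deny            # hypothetical path
    # One way a whitelisted namespace can bypass the webhook (assumption):
    namespaceSelector:
      matchExpressions:
        - key: skip-webhook
          operator: DoesNotExist
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
```

This also explains the log's paired `webhook-7612` / `webhook-7612-markers` namespaces: the test keeps its marker labels in a separate namespace so teardown can destroy both.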
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.140 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":26,"skipped":370,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:36.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Request ServerVersion STEP: Confirm major version May 21 16:03:36.802: INFO: Major version: 1 STEP: Confirm minor version May 21 16:03:36.802: INFO: cleanMinorVersion: 19 May 21 16:03:36.802: INFO: Minor version: 19 [AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:36.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-8382" for this suite. 
• ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:33.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication May 21 16:03:34.579: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 21 16:03:34.594: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 21 16:03:37.609: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:37.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2715" for this suite. STEP: Destroying namespace "webhook-2715-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":33,"skipped":438,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:33.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 21 16:03:34.020: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:37.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-332" for this suite. 
• ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":26,"skipped":424,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:36.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 21 16:03:36.312: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cca79342-94a6-451a-9ccb-d546cc01c5ee" in namespace "projected-2327" to be "Succeeded or Failed" May 21 16:03:36.314: INFO: Pod "downwardapi-volume-cca79342-94a6-451a-9ccb-d546cc01c5ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180738ms May 21 16:03:38.318: INFO: Pod "downwardapi-volume-cca79342-94a6-451a-9ccb-d546cc01c5ee": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.005383178s STEP: Saw pod success May 21 16:03:38.318: INFO: Pod "downwardapi-volume-cca79342-94a6-451a-9ccb-d546cc01c5ee" satisfied condition "Succeeded or Failed" May 21 16:03:38.320: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-cca79342-94a6-451a-9ccb-d546cc01c5ee container client-container: STEP: delete the pod May 21 16:03:38.332: INFO: Waiting for pod downwardapi-volume-cca79342-94a6-451a-9ccb-d546cc01c5ee to disappear May 21 16:03:38.334: INFO: Pod downwardapi-volume-cca79342-94a6-451a-9ccb-d546cc01c5ee no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:38.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2327" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":538,"failed":0} S ------------------------------ {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":27,"skipped":402,"failed":0} [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:36.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts 
dns-querier-1.dns-test-service.dns-4983.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4983.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4983.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4983.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4983.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4983.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 21 16:03:40.881: INFO: DNS probes using dns-4983/dns-test-bafc1127-22c3-44fd-8874-ea3fe019a60e succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:40.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4983" for this suite. 
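In the probe commands above, `$$` is not a shell PID reference: the commands are embedded in a container spec, where Kubernetes expands `$(VAR)` references and `$$` escapes to a literal `$`. De-escaped, the wheezy PodARecord portion the pod actually runs reads (a cleaned-up transcription of the logged script, not new logic):

```shell
# De-escaped form of the PodARecord checks logged above (runs on-cluster).
# Builds e.g. 10-244-1-7.dns-4983.pod.cluster.local from the pod's own IP.
podARec=$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-4983.pod.cluster.local"}')
check="$(dig +notcp +noall +answer +search ${podARec} A)" \
  && test -n "$check" && echo OK > /results/wheezy_udp@PodARecord
check="$(dig +tcp +noall +answer +search ${podARec} A)" \
  && test -n "$check" && echo OK > /results/wheezy_tcp@PodARecord
```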
• ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":28,"skipped":402,"failed":0} SSSSS ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:22.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 21 16:03:22.835: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:41.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5793" for this suite. 
• [SLOW TEST:18.674 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":352,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:37.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-7e8f5f1e-c04d-463a-b976-e48f8b22b440 STEP: Creating a pod to test consume secrets May 21 16:03:37.744: INFO: Waiting up to 5m0s for pod "pod-secrets-96d94be1-9915-4199-af17-206c9c1d7061" in namespace "secrets-1026" to be "Succeeded or Failed" May 21 16:03:37.747: INFO: Pod "pod-secrets-96d94be1-9915-4199-af17-206c9c1d7061": Phase="Pending", Reason="", readiness=false. Elapsed: 2.514379ms May 21 16:03:39.751: INFO: Pod "pod-secrets-96d94be1-9915-4199-af17-206c9c1d7061": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006688658s May 21 16:03:41.754: INFO: Pod "pod-secrets-96d94be1-9915-4199-af17-206c9c1d7061": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009629513s STEP: Saw pod success May 21 16:03:41.754: INFO: Pod "pod-secrets-96d94be1-9915-4199-af17-206c9c1d7061" satisfied condition "Succeeded or Failed" May 21 16:03:41.756: INFO: Trying to get logs from node kali-worker pod pod-secrets-96d94be1-9915-4199-af17-206c9c1d7061 container secret-volume-test: STEP: delete the pod May 21 16:03:41.767: INFO: Waiting for pod pod-secrets-96d94be1-9915-4199-af17-206c9c1d7061 to disappear May 21 16:03:41.768: INFO: Pod pod-secrets-96d94be1-9915-4199-af17-206c9c1d7061 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:41.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1026" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":447,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:37.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-c37e895f-8e89-4f83-b381-46f8f02cc3b1 STEP: Creating a pod to test consume secrets May 21 16:03:37.988: INFO: Waiting up to 5m0s for pod "pod-secrets-76e53d8b-5d9e-41c9-ba6b-a75942462fcd" in namespace "secrets-2438" to be "Succeeded or Failed" 
May 21 16:03:37.990: INFO: Pod "pod-secrets-76e53d8b-5d9e-41c9-ba6b-a75942462fcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.473303ms May 21 16:03:39.993: INFO: Pod "pod-secrets-76e53d8b-5d9e-41c9-ba6b-a75942462fcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005206047s May 21 16:03:41.996: INFO: Pod "pod-secrets-76e53d8b-5d9e-41c9-ba6b-a75942462fcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008526698s STEP: Saw pod success May 21 16:03:41.996: INFO: Pod "pod-secrets-76e53d8b-5d9e-41c9-ba6b-a75942462fcd" satisfied condition "Succeeded or Failed" May 21 16:03:41.999: INFO: Trying to get logs from node kali-worker pod pod-secrets-76e53d8b-5d9e-41c9-ba6b-a75942462fcd container secret-volume-test: STEP: delete the pod May 21 16:03:42.015: INFO: Waiting for pod pod-secrets-76e53d8b-5d9e-41c9-ba6b-a75942462fcd to disappear May 21 16:03:42.019: INFO: Pod pod-secrets-76e53d8b-5d9e-41c9-ba6b-a75942462fcd no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:42.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2438" for this suite. 
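The two secret-volume tests above both exercise file-mode handling, one via `defaultMode` on the whole volume and one via per-item mappings. A hedged sketch combining the two pod-spec shapes (the container name matches the log's `secret-volume-test`; the image, command, paths, and key names are illustrative assumptions):

```yaml
# Hedged sketch: secret volume with defaultMode (first test) and with
# per-item mappings and mode (second test). Non-log names are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
    - name: secret-volume-test        # container name from the log
      image: busybox                  # assumption
      command: ["sh", "-c", "ls -l /etc/secret-volume"]
      volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
  volumes:
    - name: secret-volume
      secret:
        secretName: secret-test-example
        defaultMode: 0400             # [LinuxOnly] mode applied to every key
        # The mappings variant projects a key to a chosen path with its own mode:
        items:
          - key: data-1
            path: new-path-data-1
            mode: 0400
```

In both runs the harness waits up to 5m0s for the pod to reach "Succeeded or Failed", then reads the test container's logs to verify the mounted file modes.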
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":454,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:38.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 16:03:38.375: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-bc96c3de-4553-41a7-8b68-9d8e631667d7" in namespace "security-context-test-1925" to be "Succeeded or Failed" May 21 16:03:38.377: INFO: Pod "busybox-readonly-false-bc96c3de-4553-41a7-8b68-9d8e631667d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179251ms May 21 16:03:40.381: INFO: Pod "busybox-readonly-false-bc96c3de-4553-41a7-8b68-9d8e631667d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005884431s May 21 16:03:42.386: INFO: Pod "busybox-readonly-false-bc96c3de-4553-41a7-8b68-9d8e631667d7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01099235s May 21 16:03:42.386: INFO: Pod "busybox-readonly-false-bc96c3de-4553-41a7-8b68-9d8e631667d7" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:42.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1925" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":539,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:35.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 21 16:03:35.660: INFO: namespace kubectl-9062 May 21 16:03:35.660: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-9062 create -f -' May 21 16:03:35.998: INFO: stderr: "" May 21 16:03:35.999: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. 
May 21 16:03:37.002: INFO: Selector matched 1 pods for map[app:agnhost] May 21 16:03:37.002: INFO: Found 0 / 1 May 21 16:03:38.002: INFO: Selector matched 1 pods for map[app:agnhost] May 21 16:03:38.002: INFO: Found 1 / 1 May 21 16:03:38.002: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 21 16:03:38.004: INFO: Selector matched 1 pods for map[app:agnhost] May 21 16:03:38.004: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 21 16:03:38.004: INFO: wait on agnhost-primary startup in kubectl-9062 May 21 16:03:38.004: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-9062 logs agnhost-primary-7r29j agnhost-primary' May 21 16:03:38.154: INFO: stderr: "" May 21 16:03:38.154: INFO: stdout: "Paused\n" STEP: exposing RC May 21 16:03:38.154: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-9062 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' May 21 16:03:38.300: INFO: stderr: "" May 21 16:03:38.300: INFO: stdout: "service/rm2 exposed\n" May 21 16:03:38.303: INFO: Service rm2 in namespace kubectl-9062 found. STEP: exposing service May 21 16:03:40.309: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-9062 expose service rm2 --name=rm3 --port=2345 --target-port=6379' May 21 16:03:40.442: INFO: stderr: "" May 21 16:03:40.442: INFO: stdout: "service/rm3 exposed\n" May 21 16:03:40.444: INFO: Service rm3 in namespace kubectl-9062 found. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:42.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9062" for this suite. 
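The expose sequence above can be reproduced against any cluster; these are the logged commands minus the harness's `--server`/`--kubeconfig`/`--namespace` plumbing, and they assume an `agnhost-primary` ReplicationController already exists in the current namespace:

```shell
# Expose an RC as a new service (service/rm2 in the log):
kubectl expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379
# A service can itself be re-exposed under a new name and port (service/rm3):
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
```

Note that `--target-port=6379` stays fixed across both: each new service still forwards to the agnhost pod's listening port, while the service-facing `--port` differs.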
• [SLOW TEST:6.825 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1222 should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":26,"skipped":679,"failed":0} SSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:41.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:44.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6400" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":35,"skipped":455,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:41.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 21 16:03:41.865: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 21 16:03:43.874: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209821, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209821, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209821, loc:(*time.Location)(0x770e980)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209821, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 21 16:03:46.884: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 16:03:46.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:48.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-3579" for this suite. 
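The conversion webhook this test deploys is wired to the CRD through `spec.conversion`; a minimal sketch of such a CRD follows. The CRD name, schemas, and the webhook path are assumptions for illustration; only the service name and namespace appear in the log:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.stable.example.com    # hypothetical CRD name
spec:
  group: stable.example.com
  scope: Namespaced
  names: {plural: examples, singular: example, kind: Example}
  versions:
  - name: v1
    served: true
    storage: true                      # v1 is the storage version
    schema: {openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}}
  - name: v2
    served: true
    storage: false
    schema: {openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}}
  conversion:
    strategy: Webhook                  # apiserver calls the webhook to convert v1 <-> v2
    webhook:
      conversionReviewVersions: ["v1"]
      clientConfig:
        service:
          namespace: crd-webhook-3579
          name: e2e-test-crd-conversion-webhook
          path: /crdconvert            # assumed path
        caBundle: "<base64-encoded CA>"  # the cert set up in "Setting up server cert"
```

Listing a mixed set of CRs in v1 and then in v2, as the test does, forces the apiserver to convert the non-matching objects through this webhook.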
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:6.574 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":16,"skipped":359,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:42.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 21 16:03:42.107: INFO: Pod name pod-release: Found 0 pods out of 1 May 21 16:03:47.110: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:48.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-882" for this suite. • [SLOW TEST:6.060 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":28,"skipped":481,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:44.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition May 21 16:03:44.918: INFO: Waiting up to 5m0s for pod "var-expansion-9a3be38e-9f53-4581-b508-e5c1a17cfdf8" in namespace "var-expansion-205" to be "Succeeded or Failed" May 21 16:03:44.921: INFO: Pod "var-expansion-9a3be38e-9f53-4581-b508-e5c1a17cfdf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.770235ms May 21 16:03:46.924: INFO: Pod "var-expansion-9a3be38e-9f53-4581-b508-e5c1a17cfdf8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.005758612s May 21 16:03:48.927: INFO: Pod "var-expansion-9a3be38e-9f53-4581-b508-e5c1a17cfdf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009533953s STEP: Saw pod success May 21 16:03:48.927: INFO: Pod "var-expansion-9a3be38e-9f53-4581-b508-e5c1a17cfdf8" satisfied condition "Succeeded or Failed" May 21 16:03:48.930: INFO: Trying to get logs from node kali-worker pod var-expansion-9a3be38e-9f53-4581-b508-e5c1a17cfdf8 container dapi-container: STEP: delete the pod May 21 16:03:48.945: INFO: Waiting for pod var-expansion-9a3be38e-9f53-4581-b508-e5c1a17cfdf8 to disappear May 21 16:03:48.948: INFO: Pod var-expansion-9a3be38e-9f53-4581-b508-e5c1a17cfdf8 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:48.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-205" for this suite. • ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:42.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 21 16:03:43.338: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 21 16:03:45.349: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209823, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209823, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209823, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209823, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 21 16:03:48.361: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:49.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7196" for this suite. STEP: Destroying namespace "webhook-7196-markers" for this suite. 
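The objects being listed and collection-deleted here have roughly the following shape; this is an illustrative sketch, not the test's actual configuration (webhook name, label, and path are hypothetical):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: e2e-test-webhook-config       # hypothetical
  labels:
    e2e-list-test: "true"             # a shared label lets a test list/delete the whole collection
webhooks:
- name: deny-configmap.example.com    # hypothetical
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-7196
      name: e2e-test-webhook
      path: /always-deny              # assumed path
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
```

That explains the sequence in the log: the first non-compliant ConfigMap create is rejected while the webhooks exist, the collection is deleted, and the second create of the same ConfigMap then succeeds.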
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.121 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":34,"skipped":568,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:48.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-8697/configmap-test-a1320a01-0a81-428d-822a-544f84fd60e4 STEP: Creating a pod to test consume configMaps May 21 16:03:48.205: INFO: Waiting up to 5m0s for pod "pod-configmaps-6b8d73d9-0838-46f4-b3e6-375a2f733b03" in namespace "configmap-8697" to be "Succeeded or Failed" May 21 16:03:48.208: INFO: Pod "pod-configmaps-6b8d73d9-0838-46f4-b3e6-375a2f733b03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.291292ms May 21 16:03:50.212: INFO: Pod "pod-configmaps-6b8d73d9-0838-46f4-b3e6-375a2f733b03": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.0061395s May 21 16:03:52.215: INFO: Pod "pod-configmaps-6b8d73d9-0838-46f4-b3e6-375a2f733b03": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010002875s May 21 16:03:54.219: INFO: Pod "pod-configmaps-6b8d73d9-0838-46f4-b3e6-375a2f733b03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013220993s STEP: Saw pod success May 21 16:03:54.219: INFO: Pod "pod-configmaps-6b8d73d9-0838-46f4-b3e6-375a2f733b03" satisfied condition "Succeeded or Failed" May 21 16:03:54.221: INFO: Trying to get logs from node kali-worker pod pod-configmaps-6b8d73d9-0838-46f4-b3e6-375a2f733b03 container env-test: STEP: delete the pod May 21 16:03:54.234: INFO: Waiting for pod pod-configmaps-6b8d73d9-0838-46f4-b3e6-375a2f733b03 to disappear May 21 16:03:54.237: INFO: Pod pod-configmaps-6b8d73d9-0838-46f4-b3e6-375a2f733b03 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:54.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8697" for this suite. 
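A pod that consumes a ConfigMap via the environment, as this test exercises, looks roughly like the sketch below (pod name, image, and ConfigMap name are placeholders, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-example         # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29               # assumed image
    command: ["sh", "-c", "env"]      # dump the environment so the test can inspect it
    env:
    - name: CONFIG_DATA               # pull a single key into one variable
      valueFrom:
        configMapKeyRef:
          name: my-config             # hypothetical ConfigMap name
          key: data-1
    envFrom:
    - configMapRef:
        name: my-config               # alternatively, import every key as an env var
```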
• [SLOW TEST:6.076 seconds] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":503,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":480,"failed":0} [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:48.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 21 16:03:48.999: INFO: Waiting up to 5m0s for pod "downwardapi-volume-facac7a1-60ae-4b17-823d-6f99fe7f3e66" in namespace "downward-api-9736" to be "Succeeded or Failed" May 21 16:03:49.002: INFO: Pod "downwardapi-volume-facac7a1-60ae-4b17-823d-6f99fe7f3e66": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.918349ms May 21 16:03:51.005: INFO: Pod "downwardapi-volume-facac7a1-60ae-4b17-823d-6f99fe7f3e66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006361812s May 21 16:03:53.009: INFO: Pod "downwardapi-volume-facac7a1-60ae-4b17-823d-6f99fe7f3e66": Phase="Running", Reason="", readiness=true. Elapsed: 4.010120158s May 21 16:03:55.012: INFO: Pod "downwardapi-volume-facac7a1-60ae-4b17-823d-6f99fe7f3e66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013682524s STEP: Saw pod success May 21 16:03:55.012: INFO: Pod "downwardapi-volume-facac7a1-60ae-4b17-823d-6f99fe7f3e66" satisfied condition "Succeeded or Failed" May 21 16:03:55.016: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-facac7a1-60ae-4b17-823d-6f99fe7f3e66 container client-container: STEP: delete the pod May 21 16:03:55.029: INFO: Waiting for pod downwardapi-volume-facac7a1-60ae-4b17-823d-6f99fe7f3e66 to disappear May 21 16:03:55.031: INFO: Pod downwardapi-volume-facac7a1-60ae-4b17-823d-6f99fe7f3e66 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:55.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9736" for this suite. 
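The "podname only" Downward API volume being tested projects a single field of the pod's metadata into a file; a minimal sketch (names and image are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example    # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29               # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name    # the only projected field: the pod's own name
```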
• [SLOW TEST:6.081 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":480,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:49.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-77602221-1395-40c2-ad3a-a8ca2d6905aa STEP: Creating a pod to test consume secrets May 21 16:03:49.615: INFO: Waiting up to 5m0s for pod "pod-secrets-86b6def9-cf9d-4b5a-83c1-2252f0fd2126" in namespace "secrets-9886" to be "Succeeded or Failed" May 21 16:03:49.618: INFO: Pod "pod-secrets-86b6def9-cf9d-4b5a-83c1-2252f0fd2126": Phase="Pending", Reason="", readiness=false. Elapsed: 2.923325ms May 21 16:03:51.622: INFO: Pod "pod-secrets-86b6def9-cf9d-4b5a-83c1-2252f0fd2126": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007414439s May 21 16:03:53.627: INFO: Pod "pod-secrets-86b6def9-cf9d-4b5a-83c1-2252f0fd2126": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.012122282s May 21 16:03:55.632: INFO: Pod "pod-secrets-86b6def9-cf9d-4b5a-83c1-2252f0fd2126": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016466869s STEP: Saw pod success May 21 16:03:55.632: INFO: Pod "pod-secrets-86b6def9-cf9d-4b5a-83c1-2252f0fd2126" satisfied condition "Succeeded or Failed" May 21 16:03:55.635: INFO: Trying to get logs from node kali-worker pod pod-secrets-86b6def9-cf9d-4b5a-83c1-2252f0fd2126 container secret-volume-test: STEP: delete the pod May 21 16:03:55.648: INFO: Waiting for pod pod-secrets-86b6def9-cf9d-4b5a-83c1-2252f0fd2126 to disappear May 21 16:03:55.651: INFO: Pod pod-secrets-86b6def9-cf9d-4b5a-83c1-2252f0fd2126 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:55.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9886" for this suite. • [SLOW TEST:6.084 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":570,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:54.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should 
be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all May 21 16:03:54.313: INFO: Waiting up to 5m0s for pod "client-containers-e3e0fafe-cb02-439c-ba1c-846bf9ea5756" in namespace "containers-6550" to be "Succeeded or Failed" May 21 16:03:54.317: INFO: Pod "client-containers-e3e0fafe-cb02-439c-ba1c-846bf9ea5756": Phase="Pending", Reason="", readiness=false. Elapsed: 3.814072ms May 21 16:03:56.321: INFO: Pod "client-containers-e3e0fafe-cb02-439c-ba1c-846bf9ea5756": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007380932s STEP: Saw pod success May 21 16:03:56.321: INFO: Pod "client-containers-e3e0fafe-cb02-439c-ba1c-846bf9ea5756" satisfied condition "Succeeded or Failed" May 21 16:03:56.324: INFO: Trying to get logs from node kali-worker2 pod client-containers-e3e0fafe-cb02-439c-ba1c-846bf9ea5756 container test-container: STEP: delete the pod May 21 16:03:56.337: INFO: Waiting for pod client-containers-e3e0fafe-cb02-439c-ba1c-846bf9ea5756 to disappear May 21 16:03:56.339: INFO: Pod client-containers-e3e0fafe-cb02-439c-ba1c-846bf9ea5756 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:56.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6550" for this suite. 
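Overriding both the image's default command and arguments, as this test verifies, is done with `command` (replaces the image ENTRYPOINT) and `args` (replaces the image CMD); a hedged sketch with placeholder names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: command-override-example      # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29               # assumed image
    command: ["/bin/echo"]            # overrides the image's ENTRYPOINT
    args: ["override", "arguments"]   # overrides the image's CMD
```

If only `args` were set, the image's own ENTRYPOINT would still run with the new arguments; setting both replaces the image defaults entirely, which is what the test's "override all" pod checks via the container's output.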
• ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":520,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:02:46.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0521 16:02:56.534176 31 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 21 16:03:58.549: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:03:58.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-469" for this suite. 
• [SLOW TEST:72.070 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":29,"skipped":560,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:42.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-615 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-615 STEP: creating replication controller externalsvc in namespace services-615 I0521 16:03:42.506430 33 runners.go:190] Created replication controller with name: externalsvc, namespace: services-615, replica count: 2 I0521 16:03:45.556941 33 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort 
service to type=ExternalName May 21 16:03:45.576: INFO: Creating new exec pod May 21 16:03:47.586: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-615 exec execpod7k2xh -- /bin/sh -x -c nslookup nodeport-service.services-615.svc.cluster.local' May 21 16:03:48.164: INFO: stderr: "+ nslookup nodeport-service.services-615.svc.cluster.local\n" May 21 16:03:48.164: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-615.svc.cluster.local\tcanonical name = externalsvc.services-615.svc.cluster.local.\nName:\texternalsvc.services-615.svc.cluster.local\nAddress: 10.96.123.159\n\n" STEP: deleting ReplicationController externalsvc in namespace services-615, will wait for the garbage collector to delete the pods May 21 16:03:48.222: INFO: Deleting ReplicationController externalsvc took: 4.850219ms May 21 16:03:48.322: INFO: Terminating ReplicationController externalsvc pods took: 100.27707ms May 21 16:04:00.439: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:04:00.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-615" for this suite. 
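After the conversion step in this test, the former NodePort service is roughly the following ExternalName service; the `externalName` FQDN matches the CNAME that the `nslookup` output in the log confirms:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: services-615
spec:
  type: ExternalName                  # switched from NodePort
  # DNS for this service now returns a CNAME to the backing service's FQDN,
  # which is exactly what the exec pod's nslookup verified above.
  externalName: externalsvc.services-615.svc.cluster.local
  # selector, clusterIP, and nodePort allocations are dropped on conversion
```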
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:18.001 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":27,"skipped":682,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:56.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 21 16:03:56.757: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 21 16:03:58.766: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209836, 
loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209836, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209836, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209836, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 21 16:04:01.777: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:04:01.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9375" for this suite. 
STEP: Destroying namespace "webhook-9375-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.542 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":31,"skipped":526,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:04:01.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-2beb6358-d21a-4ebc-8412-21ab2126c9c8 STEP: Creating a pod to test consume configMaps May 21 16:04:01.955: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7b47bb91-7444-45b0-8d28-b0c554be2ba2" in namespace "projected-5662" to be "Succeeded or Failed" May 21 16:04:01.957: INFO: Pod "pod-projected-configmaps-7b47bb91-7444-45b0-8d28-b0c554be2ba2": 
Phase="Pending", Reason="", readiness=false. Elapsed: 2.015382ms May 21 16:04:03.960: INFO: Pod "pod-projected-configmaps-7b47bb91-7444-45b0-8d28-b0c554be2ba2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005018771s STEP: Saw pod success May 21 16:04:03.960: INFO: Pod "pod-projected-configmaps-7b47bb91-7444-45b0-8d28-b0c554be2ba2" satisfied condition "Succeeded or Failed" May 21 16:04:03.962: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-7b47bb91-7444-45b0-8d28-b0c554be2ba2 container projected-configmap-volume-test: STEP: delete the pod May 21 16:04:03.974: INFO: Waiting for pod pod-projected-configmaps-7b47bb91-7444-45b0-8d28-b0c554be2ba2 to disappear May 21 16:04:03.976: INFO: Pod pod-projected-configmaps-7b47bb91-7444-45b0-8d28-b0c554be2ba2 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:04:03.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5662" for this suite. 
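The projected-ConfigMap test that just passed mounts a ConfigMap through a `projected` volume using key-to-path mappings and runs the consuming container as a non-root user. A minimal sketch of the kind of manifests such a test creates follows; the object names, image, user ID, and paths here are illustrative assumptions, not taken from the test source:

```python
# Illustrative sketch: a pod consuming a ConfigMap via a projected volume
# with a key -> path mapping, running as non-root. All names are hypothetical.
import json

config_map = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "example-projected-cm"},   # hypothetical name
    "data": {"data-1": "value-1"},
}

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "example-projected-pod"},  # hypothetical name
    "spec": {
        "securityContext": {"runAsUser": 1000},     # non-root, matching the spec's intent
        "restartPolicy": "Never",
        "containers": [{
            "name": "projected-configmap-volume-test",
            # agnhost appears elsewhere in this log; its use here is an assumption
            "image": "k8s.gcr.io/e2e-test-images/agnhost:2.20",
            "volumeMounts": [{
                "name": "cm-volume",
                "mountPath": "/etc/projected-configmap-volume",
            }],
        }],
        "volumes": [{
            "name": "cm-volume",
            "projected": {
                "sources": [{
                    "configMap": {
                        "name": "example-projected-cm",
                        # the "mappings" in the spec name: key -> relative path
                        "items": [{"key": "data-1", "path": "path/to/data-1"}],
                    }
                }]
            },
        }],
    },
}

print(json.dumps(pod["spec"]["volumes"][0], indent=2))
```

Applied to a cluster, the container would see the ConfigMap key `data-1` at `/etc/projected-configmap-volume/path/to/data-1`, readable as UID 1000.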
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":530,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:58.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication May 21 16:03:58.879: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 21 16:03:58.891: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created May 21 16:04:00.900: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209838, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209838, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209838, loc:(*time.Location)(0x770e980)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209838, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 21 16:04:03.911: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:04:03.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8085" for this suite. STEP: Destroying namespace "webhook-8085-markers" for this suite. 
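The fail-closed webhook test above registers a webhook whose backend the API server cannot reach, then verifies that matching requests (here, ConfigMap creation in the webhook's namespace) are unconditionally rejected. That behaviour comes from `failurePolicy: Fail`. A sketch of such a configuration, with hypothetical names and service coordinates:

```python
# Illustrative sketch of a fail-closed ValidatingWebhookConfiguration:
# with failurePolicy "Fail", an unreachable webhook backend causes matching
# admission requests to be denied rather than allowed through.
# All names and the service reference are hypothetical.
import json

webhook_config = {
    "apiVersion": "admissionregistration.k8s.io/v1",
    "kind": "ValidatingWebhookConfiguration",
    "metadata": {"name": "fail-closed-example"},     # hypothetical name
    "webhooks": [{
        "name": "fail-closed.example.com",           # hypothetical
        "failurePolicy": "Fail",                     # the fail-closed part
        "sideEffects": "None",
        "admissionReviewVersions": ["v1"],
        "rules": [{
            "apiGroups": [""],
            "apiVersions": ["v1"],
            "operations": ["CREATE"],
            "resources": ["configmaps"],
        }],
        "clientConfig": {
            # Points at a service the API server cannot talk to, so every
            # matching CREATE of a ConfigMap is rejected.
            "service": {
                "namespace": "default",
                "name": "no-such-webhook",           # deliberately unreachable
                "path": "/validate",
            },
        },
    }],
}

print(webhook_config["webhooks"][0]["failurePolicy"])
```

The opposite choice, `failurePolicy: Ignore`, would let requests through when the backend is down; the test exists precisely to pin down the `Fail` semantics.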
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.415 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":30,"skipped":566,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:48.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:04:04.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9502" for this suite. • [SLOW TEST:16.105 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:04:03.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 16:04:04.021: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:04:05.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9259" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":33,"skipped":534,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":-1,"completed":17,"skipped":366,"failed":0} [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:04:04.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 21 16:04:04.224: INFO: Waiting up to 5m0s for pod "downward-api-fcd107f3-2441-48c6-b4da-36ccd741a355" in namespace "downward-api-3544" to be "Succeeded or Failed" May 21 16:04:04.226: INFO: Pod "downward-api-fcd107f3-2441-48c6-b4da-36ccd741a355": Phase="Pending", Reason="", readiness=false. Elapsed: 2.400929ms May 21 16:04:06.230: INFO: Pod "downward-api-fcd107f3-2441-48c6-b4da-36ccd741a355": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006165904s STEP: Saw pod success May 21 16:04:06.230: INFO: Pod "downward-api-fcd107f3-2441-48c6-b4da-36ccd741a355" satisfied condition "Succeeded or Failed" May 21 16:04:06.233: INFO: Trying to get logs from node kali-worker2 pod downward-api-fcd107f3-2441-48c6-b4da-36ccd741a355 container dapi-container: STEP: delete the pod May 21 16:04:06.246: INFO: Waiting for pod downward-api-fcd107f3-2441-48c6-b4da-36ccd741a355 to disappear May 21 16:04:06.248: INFO: Pod downward-api-fcd107f3-2441-48c6-b4da-36ccd741a355 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:04:06.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3544" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":366,"failed":0} SSSSS ------------------------------ [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:04:04.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 21 16:04:08.088: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 21 16:04:08.091: INFO: Pod pod-with-prestop-http-hook still exists May 21 16:04:10.091: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 21 16:04:10.095: INFO: Pod pod-with-prestop-http-hook still exists May 21 16:04:12.091: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 21 16:04:12.095: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:04:12.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4677" for this suite. 
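The lifecycle-hook test above creates a pod with a preStop HTTP hook, deletes it, and then polls until the pod disappears before checking that the hook fired. On deletion, the kubelet issues an HTTP GET to the hook target before stopping the container. A sketch of such a pod (the pod name matches the log; the image, target address, and port are illustrative assumptions):

```python
# Illustrative sketch: a pod with a preStop httpGet lifecycle hook.
# The kubelet calls this endpoint before terminating the container.
import json

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-with-prestop-http-hook"},  # name as seen in the log
    "spec": {
        "containers": [{
            "name": "pod-with-prestop-http-hook",
            "image": "k8s.gcr.io/pause:3.2",             # assumed placeholder image
            "lifecycle": {
                "preStop": {
                    "httpGet": {
                        # hypothetical handler-pod address, path, and port:
                        # in the e2e test this targets the handler pod created
                        # in the BeforeEach step
                        "host": "10.244.1.1",
                        "path": "/echo?msg=prestop",
                        "port": 8080,
                    }
                }
            },
        }],
    },
}

print(json.dumps(pod["spec"]["containers"][0]["lifecycle"], indent=2))
```

The repeated "Waiting for pod ... to disappear" lines in the log are the test tolerating the pod's graceful termination window while the hook runs.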
• [SLOW TEST:8.079 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":596,"failed":0} S ------------------------------ [BeforeEach] [k8s.io] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:04:06.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 21 16:04:08.320: INFO: &Pod{ObjectMeta:{send-events-82eac4b0-9b37-4b50-adba-dc85c36fc9cb events-9749 /api/v1/namespaces/events-9749/pods/send-events-82eac4b0-9b37-4b50-adba-dc85c36fc9cb f49fa342-a04b-4ddb-9720-661bdfe19e13 27391 0 2021-05-21 16:04:06 +0000 UTC map[name:foo time:303610782] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.220" ], "mac": "f6:f5:d4:8e:6e:9a", "default": true, "dns": {} }] 
k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.220" ], "mac": "f6:f5:d4:8e:6e:9a", "default": true, "dns": {} }]] [] [] [{e2e.test Update v1 2021-05-21 16:04:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-21 16:04:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-21 16:04:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.220\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cttc7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cttc7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,Pho
tonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cttc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*Pree
mptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:04:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:04:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:04:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-21 16:04:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.220,StartTime:2021-05-21 16:04:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-21 16:04:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://fefbb2048118fecf364ffe21b28d5b272f162ef857bea5b642e73143ffaeab9d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.220,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 21 16:04:10.324: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 21 16:04:12.328: INFO: Saw kubelet event for our pod. 
STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:04:12.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9749" for this suite. • [SLOW TEST:6.077 seconds] [k8s.io] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":-1,"completed":19,"skipped":371,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:04:12.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-a527c0e4-274d-4161-9a7c-ab051d15af69 STEP: Creating a pod to test consume secrets May 21 16:04:12.152: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5ca9b63a-2e82-46a4-afb3-6100b3eac96d" in namespace "projected-3324" to be "Succeeded or Failed" May 21 16:04:12.155: INFO: Pod "pod-projected-secrets-5ca9b63a-2e82-46a4-afb3-6100b3eac96d": 
Phase="Pending", Reason="", readiness=false. Elapsed: 2.351057ms May 21 16:04:14.158: INFO: Pod "pod-projected-secrets-5ca9b63a-2e82-46a4-afb3-6100b3eac96d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005958852s STEP: Saw pod success May 21 16:04:14.158: INFO: Pod "pod-projected-secrets-5ca9b63a-2e82-46a4-afb3-6100b3eac96d" satisfied condition "Succeeded or Failed" May 21 16:04:14.160: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-5ca9b63a-2e82-46a4-afb3-6100b3eac96d container projected-secret-volume-test: STEP: delete the pod May 21 16:04:14.173: INFO: Waiting for pod pod-projected-secrets-5ca9b63a-2e82-46a4-afb3-6100b3eac96d to disappear May 21 16:04:14.175: INFO: Pod pod-projected-secrets-5ca9b63a-2e82-46a4-afb3-6100b3eac96d no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:04:14.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3324" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":597,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:04:05.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 16:04:05.243: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 21 16:04:09.235: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-277 --namespace=crd-publish-openapi-277 create -f -' May 21 16:04:09.667: INFO: stderr: "" May 21 16:04:09.667: INFO: stdout: "e2e-test-crd-publish-openapi-103-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 21 16:04:09.668: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-277 --namespace=crd-publish-openapi-277 delete e2e-test-crd-publish-openapi-103-crds test-cr' May 21 16:04:09.804: INFO: stderr: "" May 21 16:04:09.804: INFO: stdout: "e2e-test-crd-publish-openapi-103-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 21 16:04:09.804: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-277 
--namespace=crd-publish-openapi-277 apply -f -' May 21 16:04:10.068: INFO: stderr: "" May 21 16:04:10.068: INFO: stdout: "e2e-test-crd-publish-openapi-103-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 21 16:04:10.068: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-277 --namespace=crd-publish-openapi-277 delete e2e-test-crd-publish-openapi-103-crds test-cr' May 21 16:04:10.193: INFO: stderr: "" May 21 16:04:10.193: INFO: stdout: "e2e-test-crd-publish-openapi-103-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 21 16:04:10.193: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-277 explain e2e-test-crd-publish-openapi-103-crds' May 21 16:04:10.472: INFO: stderr: "" May 21 16:04:10.472: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-103-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:04:14.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-277" for this suite. 
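The CRD-publish-OpenAPI test above shows `kubectl create`, `apply`, and `explain` succeeding against a CRD that publishes no property constraints, so client-side validation accepts arbitrary unknown fields (and `explain` prints an empty description). A sketch of a minimal CRD of that kind; the group and resource names are hypothetical, and the use of `x-kubernetes-preserve-unknown-fields` is an assumption about how such a schema-free CRD would be expressed in `apiextensions.k8s.io/v1`:

```python
# Illustrative sketch: a CRD whose schema places no constraints on CR content,
# so kubectl accepts any unknown properties. Group/names are hypothetical.
import json

crd = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "widgets.example.com"},   # must be <plural>.<group>
    "spec": {
        "group": "example.com",
        "scope": "Namespaced",
        "names": {"plural": "widgets", "singular": "widget", "kind": "Widget"},
        "versions": [{
            "name": "v1",
            "served": True,
            "storage": True,
            "schema": {
                # No property list: unknown fields are preserved rather than
                # pruned, which is what lets `create`/`apply` pass any content.
                "openAPIV3Schema": {
                    "type": "object",
                    "x-kubernetes-preserve-unknown-fields": True,
                }
            },
        }],
    },
}

print(json.dumps(crd["spec"]["versions"][0]["schema"], indent=2))
```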
• [SLOW TEST:9.270 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD without validation schema [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
[BeforeEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:04:12.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-90bb1635-1cc4-40c1-a8cd-3f728b529cdc
STEP: Creating a pod to test consume configMaps
May 21 16:04:12.467: INFO: Waiting up to 5m0s for pod "pod-configmaps-ba698141-f6fc-4b5b-a5c0-e2b1f23b205c" in namespace "configmap-942" to be "Succeeded or Failed"
May 21 16:04:12.469: INFO: Pod "pod-configmaps-ba698141-f6fc-4b5b-a5c0-e2b1f23b205c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.313314ms
May 21 16:04:14.472: INFO: Pod "pod-configmaps-ba698141-f6fc-4b5b-a5c0-e2b1f23b205c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005363847s
STEP: Saw pod success
May 21 16:04:14.472: INFO: Pod "pod-configmaps-ba698141-f6fc-4b5b-a5c0-e2b1f23b205c" satisfied condition "Succeeded or Failed"
May 21 16:04:14.475: INFO: Trying to get logs from node kali-worker pod pod-configmaps-ba698141-f6fc-4b5b-a5c0-e2b1f23b205c container configmap-volume-test:
STEP: delete the pod
May 21 16:04:14.487: INFO: Waiting for pod pod-configmaps-ba698141-f6fc-4b5b-a5c0-e2b1f23b205c to disappear
May 21 16:04:14.489: INFO: Pod pod-configmaps-ba698141-f6fc-4b5b-a5c0-e2b1f23b205c no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:04:14.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-942" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":433,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":34,"skipped":550,"failed":0}
[BeforeEach] [k8s.io] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:04:14.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:04:16.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2636" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":550,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:04:14.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 21 16:04:14.993: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 21 16:04:17.003: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209855, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209855, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209855, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757209854, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 21 16:04:20.014: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
May 21 16:04:20.039: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:04:20.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6417" for this suite.
STEP: Destroying namespace "webhook-6417-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.543 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should deny crd creation [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":21,"skipped":472,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:04:16.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
May 21 16:04:17.170: INFO: role binding webhook-auth-reader already exists
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 21 16:04:17.181: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 21 16:04:20.198: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:04:20.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9355" for this suite.
STEP: Destroying namespace "webhook-9355-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":36,"skipped":551,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:04:20.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
May 21 16:04:20.412: INFO: Created pod &Pod{ObjectMeta:{dns-3216 dns-3216 /api/v1/namespaces/dns-3216/pods/dns-3216 aeb7ed47-99a8-42fc-a903-ecd6d54936fa 27772 0 2021-05-21 16:04:20 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-05-21 16:04:20 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-djhll,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-djhll,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-djhll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 21 16:04:20.415: INFO: The status of Pod dns-3216 is Pending, waiting for it to be Running (with Ready = true)
May 21 16:04:22.419: INFO: The status of Pod dns-3216 is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
May 21 16:04:22.419: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-3216 PodName:dns-3216 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 21 16:04:22.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Verifying customized DNS server is configured on pod...
May 21 16:04:22.536: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-3216 PodName:dns-3216 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 21 16:04:22.536: INFO: >>> kubeConfig: /root/.kube/config
May 21 16:04:22.679: INFO: Deleting pod dns-3216...
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:04:22.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3216" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":37,"skipped":607,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:04:22.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-d214c541-48f2-47bd-8253-1b016509d021
STEP: Creating a pod to test consume configMaps
May 21 16:04:22.839: INFO: Waiting up to 5m0s for pod "pod-configmaps-e8c5abab-f664-4d96-9995-37b305b26e22" in namespace "configmap-5158" to be "Succeeded or Failed"
May 21 16:04:22.841: INFO: Pod "pod-configmaps-e8c5abab-f664-4d96-9995-37b305b26e22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.589957ms
May 21 16:04:24.845: INFO: Pod "pod-configmaps-e8c5abab-f664-4d96-9995-37b305b26e22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006575457s
STEP: Saw pod success
May 21 16:04:24.845: INFO: Pod "pod-configmaps-e8c5abab-f664-4d96-9995-37b305b26e22" satisfied condition "Succeeded or Failed"
May 21 16:04:24.848: INFO: Trying to get logs from node kali-worker pod pod-configmaps-e8c5abab-f664-4d96-9995-37b305b26e22 container configmap-volume-test:
STEP: delete the pod
May 21 16:04:24.863: INFO: Waiting for pod pod-configmaps-e8c5abab-f664-4d96-9995-37b305b26e22 to disappear
May 21 16:04:24.866: INFO: Pod pod-configmaps-e8c5abab-f664-4d96-9995-37b305b26e22 no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:04:24.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5158" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":673,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:04:20.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4160.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4160.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4160.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4160.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 21 16:04:22.204: INFO: DNS probes using dns-test-4c16605d-f184-4d02-815f-d3b7da00e303 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4160.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4160.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4160.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4160.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 21 16:04:24.242: INFO: DNS probes using dns-test-b83c2483-07b7-4540-af09-b42c9b493341 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4160.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4160.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4160.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4160.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 21 16:04:26.289: INFO: DNS probes using dns-test-78a1009e-4bf2-457c-889a-b44fbbb7c400 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:04:26.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4160" for this suite.
• [SLOW TEST:6.176 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for ExternalName services [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:04:24.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in volume subpath
May 21 16:04:24.995: INFO: Waiting up to 5m0s for pod "var-expansion-81309625-63ba-412e-8c01-f6b7ed5fef49" in namespace "var-expansion-2216" to be "Succeeded or Failed"
May 21 16:04:24.998: INFO: Pod "var-expansion-81309625-63ba-412e-8c01-f6b7ed5fef49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.809406ms
May 21 16:04:27.002: INFO: Pod "var-expansion-81309625-63ba-412e-8c01-f6b7ed5fef49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006721376s
STEP: Saw pod success
May 21 16:04:27.002: INFO: Pod "var-expansion-81309625-63ba-412e-8c01-f6b7ed5fef49" satisfied condition "Succeeded or Failed"
May 21 16:04:27.005: INFO: Trying to get logs from node kali-worker2 pod var-expansion-81309625-63ba-412e-8c01-f6b7ed5fef49 container dapi-container:
STEP: delete the pod
May 21 16:04:27.016: INFO: Waiting for pod var-expansion-81309625-63ba-412e-8c01-f6b7ed5fef49 to disappear
May 21 16:04:27.019: INFO: Pod var-expansion-81309625-63ba-412e-8c01-f6b7ed5fef49 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:04:27.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2216" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":-1,"completed":39,"skipped":718,"failed":0}
SS
------------------------------
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:03:55.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6441.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6441.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6441.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6441.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6441.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6441.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6441.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6441.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6441.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6441.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6441.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 72.207.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.207.72_udp@PTR;check="$$(dig +tcp +noall +answer +search 72.207.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.207.72_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6441.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6441.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6441.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6441.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6441.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6441.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6441.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6441.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6441.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6441.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6441.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 72.207.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.207.72_udp@PTR;check="$$(dig +tcp +noall +answer +search 72.207.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.207.72_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 21 16:04:03.757: INFO: Unable to read wheezy_udp@dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:03.761: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:03.764: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:03.767: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:03.789: INFO: Unable to read jessie_udp@dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:03.792: INFO: Unable to read jessie_tcp@dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:03.795: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:03.798: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:03.818: INFO: Lookups using dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724 failed for: [wheezy_udp@dns-test-service.dns-6441.svc.cluster.local wheezy_tcp@dns-test-service.dns-6441.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local jessie_udp@dns-test-service.dns-6441.svc.cluster.local jessie_tcp@dns-test-service.dns-6441.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local]
May 21 16:04:08.822: INFO: Unable to read wheezy_udp@dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:08.826: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:08.829: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:08.832: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:08.855: INFO: Unable to read jessie_udp@dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:08.858: INFO: Unable to read jessie_tcp@dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:08.861: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:08.864: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:08.884: INFO: Lookups using dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724 failed for: [wheezy_udp@dns-test-service.dns-6441.svc.cluster.local wheezy_tcp@dns-test-service.dns-6441.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local jessie_udp@dns-test-service.dns-6441.svc.cluster.local jessie_tcp@dns-test-service.dns-6441.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local]
May 21 16:04:13.826: INFO: Unable to read wheezy_udp@dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:13.832: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:13.836: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:13.840: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:13.848: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: Get "https://172.30.13.89:46681/api/v1/namespaces/dns-6441/pods/dns-test-a123f203-c5ce-44ed-830a-a34037662724/proxy/results/wheezy_tcp@_http._tcp.test-service-2.dns-6441.svc.cluster.local": stream error: stream ID 1377; INTERNAL_ERROR
May 21 16:04:13.860: INFO: Unable to read jessie_udp@dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:13.862: INFO: Unable to read jessie_tcp@dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:13.865: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:13.868: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:13.885: INFO: Lookups using dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724 failed for: [wheezy_udp@dns-test-service.dns-6441.svc.cluster.local wheezy_tcp@dns-test-service.dns-6441.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-6441.svc.cluster.local jessie_udp@dns-test-service.dns-6441.svc.cluster.local jessie_tcp@dns-test-service.dns-6441.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local]
May 21 16:04:18.822: INFO: Unable to read wheezy_udp@dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:18.826: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:18.829: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:18.833: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:18.857: INFO: Unable to read jessie_udp@dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:18.860: INFO: Unable to read jessie_tcp@dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:18.864: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:18.868: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724)
May 21 16:04:18.888: INFO: Lookups using dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724 failed for: [wheezy_udp@dns-test-service.dns-6441.svc.cluster.local wheezy_tcp@dns-test-service.dns-6441.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local jessie_udp@dns-test-service.dns-6441.svc.cluster.local jessie_tcp@dns-test-service.dns-6441.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local]
May 21 16:04:23.822: INFO: Unable to
read wheezy_udp@dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724) May 21 16:04:23.826: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724) May 21 16:04:23.830: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724) May 21 16:04:23.833: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724) May 21 16:04:23.857: INFO: Unable to read jessie_udp@dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724) May 21 16:04:23.861: INFO: Unable to read jessie_tcp@dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724) May 21 16:04:23.864: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local from pod dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724) May 21 16:04:23.868: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local from pod 
dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724: the server could not find the requested resource (get pods dns-test-a123f203-c5ce-44ed-830a-a34037662724) May 21 16:04:23.890: INFO: Lookups using dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724 failed for: [wheezy_udp@dns-test-service.dns-6441.svc.cluster.local wheezy_tcp@dns-test-service.dns-6441.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local jessie_udp@dns-test-service.dns-6441.svc.cluster.local jessie_tcp@dns-test-service.dns-6441.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6441.svc.cluster.local] May 21 16:04:28.888: INFO: DNS probes using dns-6441/dns-test-a123f203-c5ce-44ed-830a-a34037662724 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:04:28.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6441" for this suite. 
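As an editor's aside: the eight names in the repeated "Lookups ... failed for" lists above follow a fixed pattern built from the image tag (wheezy/jessie), the protocol, the service name, and the namespace. A minimal sketch that regenerates that list, using the service name and namespace taken from the log:

```shell
#!/usr/bin/env bash
# Sketch: rebuild the eight lookup names the services DNS probe checks.
# svc and ns are taken from the log above (dns-test-service in namespace dns-6441).
svc="dns-test-service"
ns="dns-6441"
names=()
for img in wheezy jessie; do
  # Plain service A/AAAA record, then the _http._tcp SRV record form.
  for name in "${svc}.${ns}.svc.cluster.local" "_http._tcp.${svc}.${ns}.svc.cluster.local"; do
    for proto in udp tcp; do
      names+=("${img}_${proto}@${name}")
    done
  done
done
printf '%s\n' "${names[@]}"
```

The output matches the eight entries in the failed-lookup lists logged above, in the same order.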
• [SLOW TEST:33.222 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":36,"skipped":600,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:04:28.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap that has name configmap-test-emptyKey-d7ca2344-ac1c-4ce3-b7b1-796b330b8ced
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:04:28.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8448" for this suite.
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":37,"skipped":605,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:04:27.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7526.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-7526.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7526.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7526.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-7526.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7526.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 21 16:04:31.100: INFO: DNS probes using dns-7526/dns-test-2e41c0f1-e0af-4cb2-8af7-b2022f08ee03 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:04:31.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7526" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":40,"skipped":720,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:04:29.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-6039caa4-35c1-4cdd-a4be-f0ad8bfc3936 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-6039caa4-35c1-4cdd-a4be-f0ad8bfc3936 STEP: 
waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:04:33.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9751" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":636,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:04:31.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:04:35.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1331" for this suite. 
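The pod A-record used by the Hostname DNS probes earlier is derived from the pod IP with the awk one-liner quoted in the probe script (the `$$` doubling in the log is harness escaping for a single `$`). Run outside the cluster with a made-up sample IP, the transformation looks like:

```shell
#!/usr/bin/env bash
# Reproduce the podARec construction from the probe script logged above.
# 10.244.1.7 is a hypothetical sample pod IP; in the test, `hostname -i` supplies it.
pod_ip="10.244.1.7"
podARec=$(echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-7526.pod.cluster.local"}')
echo "$podARec"   # 10-244-1-7.dns-7526.pod.cluster.local
```

This is the dashed-IP pod record form (`a-b-c-d.<ns>.pod.cluster.local`) that the wheezy/jessie probers then resolve with `dig` over UDP and TCP.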
• ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":739,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:04:35.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange May 21 16:04:35.280: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values May 21 16:04:35.286: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 21 16:04:35.286: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange May 21 16:04:35.296: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} 
ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 21 16:04:35.296: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange May 21 16:04:35.305: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] May 21 16:04:35.305: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted May 21 16:04:42.331: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:04:42.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-6044" for this suite. • [SLOW TEST:7.103 seconds] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":42,"skipped":755,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:04:42.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:04:42.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"events-632" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":43,"skipped":809,"failed":0} SSSS ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":22,"skipped":495,"failed":0} [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:04:26.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-82zv STEP: Creating a pod to test atomic-volume-subpath May 21 16:04:26.356: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-82zv" in namespace "subpath-1380" to be "Succeeded or Failed" May 21 16:04:26.359: INFO: Pod "pod-subpath-test-downwardapi-82zv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.619457ms May 21 16:04:28.363: INFO: Pod "pod-subpath-test-downwardapi-82zv": Phase="Running", Reason="", readiness=true. Elapsed: 2.006699692s May 21 16:04:30.367: INFO: Pod "pod-subpath-test-downwardapi-82zv": Phase="Running", Reason="", readiness=true. Elapsed: 4.01101496s May 21 16:04:32.371: INFO: Pod "pod-subpath-test-downwardapi-82zv": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.015168674s May 21 16:04:34.375: INFO: Pod "pod-subpath-test-downwardapi-82zv": Phase="Running", Reason="", readiness=true. Elapsed: 8.019043434s May 21 16:04:36.379: INFO: Pod "pod-subpath-test-downwardapi-82zv": Phase="Running", Reason="", readiness=true. Elapsed: 10.022843152s May 21 16:04:38.383: INFO: Pod "pod-subpath-test-downwardapi-82zv": Phase="Running", Reason="", readiness=true. Elapsed: 12.026414641s May 21 16:04:40.386: INFO: Pod "pod-subpath-test-downwardapi-82zv": Phase="Running", Reason="", readiness=true. Elapsed: 14.029928033s May 21 16:04:42.390: INFO: Pod "pod-subpath-test-downwardapi-82zv": Phase="Running", Reason="", readiness=true. Elapsed: 16.033947317s May 21 16:04:44.394: INFO: Pod "pod-subpath-test-downwardapi-82zv": Phase="Running", Reason="", readiness=true. Elapsed: 18.03747132s May 21 16:04:46.398: INFO: Pod "pod-subpath-test-downwardapi-82zv": Phase="Running", Reason="", readiness=true. Elapsed: 20.041556186s May 21 16:04:48.402: INFO: Pod "pod-subpath-test-downwardapi-82zv": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.04541049s STEP: Saw pod success May 21 16:04:48.402: INFO: Pod "pod-subpath-test-downwardapi-82zv" satisfied condition "Succeeded or Failed" May 21 16:04:48.404: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-downwardapi-82zv container test-container-subpath-downwardapi-82zv: STEP: delete the pod May 21 16:04:48.418: INFO: Waiting for pod pod-subpath-test-downwardapi-82zv to disappear May 21 16:04:48.420: INFO: Pod pod-subpath-test-downwardapi-82zv no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-82zv May 21 16:04:48.420: INFO: Deleting pod "pod-subpath-test-downwardapi-82zv" in namespace "subpath-1380" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:04:48.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1380" for this suite. • [SLOW TEST:22.113 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":23,"skipped":495,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:04:48.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a 
default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-06078294-f685-4e36-bd12-dc8cea975d3f STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:04:50.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9516" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":501,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:14.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0521 16:03:54.218441 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 21 16:04:56.237: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
May 21 16:04:56.237: INFO: Deleting pod "simpletest.rc-5zwnk" in namespace "gc-8133"
May 21 16:04:56.246: INFO: Deleting pod "simpletest.rc-8ldpr" in namespace "gc-8133"
May 21 16:04:56.253: INFO: Deleting pod "simpletest.rc-987l5" in namespace "gc-8133"
May 21 16:04:56.260: INFO: Deleting pod "simpletest.rc-9mjqx" in namespace "gc-8133"
May 21 16:04:56.267: INFO: Deleting pod "simpletest.rc-9sjs2" in namespace "gc-8133"
May 21 16:04:56.274: INFO: Deleting pod "simpletest.rc-fllk7" in namespace "gc-8133"
May 21 16:04:56.280: INFO: Deleting pod "simpletest.rc-fz9sm" in namespace "gc-8133"
May 21 16:04:56.287: INFO: Deleting pod "simpletest.rc-hkvjv" in namespace "gc-8133"
May 21 16:04:56.293: INFO: Deleting pod "simpletest.rc-qv6m2" in namespace "gc-8133"
May 21 16:04:56.300: INFO: Deleting pod "simpletest.rc-wxb86" in namespace "gc-8133"
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:04:56.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8133" for this suite.
• [SLOW TEST:102.155 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":17,"skipped":194,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:04:50.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:04:58.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-587" for this suite.
• [SLOW TEST:8.049 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":25,"skipped":513,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:04:42.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-1212 [It] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-1212 May 21 16:04:42.592: INFO: Found 0 stateful pods, waiting for 1 May 21 16:04:52.596: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 21 16:04:52.617: INFO: Deleting all statefulset in ns statefulset-1212 May 21 16:04:52.620: INFO: Scaling statefulset ss to 0 May 21 16:05:02.643: INFO: Waiting for statefulset status.replicas updated to 0 May 21 16:05:02.646: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:05:02.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1212" for this suite. • [SLOW TEST:20.116 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":44,"skipped":813,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:05:02.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 21 16:05:03.278: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 21 16:05:06.299: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:05:06.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9973" for this suite.
STEP: Destroying namespace "webhook-9973-markers" for this suite.
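The discovery steps above fetch /apis/admissionregistration.k8s.io/v1 and check that both webhook configuration resources are listed. A minimal sketch of that check, against a hypothetical sample discovery document (the JSON below is illustrative, not captured from this run):

```python
import json

# Hypothetical APIResourceList, shaped like GET /apis/admissionregistration.k8s.io/v1
sample = json.loads("""
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "admissionregistration.k8s.io/v1",
  "resources": [
    {"name": "mutatingwebhookconfigurations", "namespaced": false,
     "kind": "MutatingWebhookConfiguration"},
    {"name": "validatingwebhookconfigurations", "namespaced": false,
     "kind": "ValidatingWebhookConfiguration"}
  ]
}
""")

def has_webhook_resources(doc):
    # The e2e test passes only if both resource names appear in the document.
    names = {r["name"] for r in doc.get("resources", [])}
    return {"mutatingwebhookconfigurations", "validatingwebhookconfigurations"} <= names

print(has_webhook_resources(sample))  # → True
```

The same subset check is applied at each level of the discovery walk (/apis, the group document, and the group/version document).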
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":45,"skipped":843,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:05:06.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: validating cluster-info
May 21 16:05:06.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-4701 cluster-info'
May 21 16:05:06.572: INFO: stderr: ""
May 21 16:05:06.572: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.13.89:46681\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:05:06.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4701" for this suite.
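The `\x1b[0;32m` sequences in the cluster-info stdout above are ANSI color escapes; the validation amounts to stripping them and checking that the master endpoint line is present. A sketch using the stdout captured in this run:

```python
import re

# stdout as captured in the log above, ANSI color escapes included
stdout = ("\x1b[0;32mKubernetes master\x1b[0m is running at "
          "\x1b[0;33mhttps://172.30.13.89:46681\x1b[0m\n\n"
          "To further debug and diagnose cluster problems, "
          "use 'kubectl cluster-info dump'.\n")

# Strip SGR color sequences such as \x1b[0;32m and \x1b[0m.
plain = re.sub(r"\x1b\[[0-9;]*m", "", stdout)

assert "Kubernetes master is running at https://172.30.13.89:46681" in plain
print(plain.splitlines()[0])  # → Kubernetes master is running at https://172.30.13.89:46681
```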
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":-1,"completed":46,"skipped":874,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:04:33.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6327.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6327.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6327.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6327.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6327.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6327.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6327.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6327.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6327.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6327.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 21 16:04:37.225: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:37.228: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:37.232: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:37.235: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:37.245: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:37.249: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:37.252: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:37.256: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:37.263: INFO: Lookups using dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6327.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6327.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local jessie_udp@dns-test-service-2.dns-6327.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6327.svc.cluster.local]
May 21 16:04:42.271: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:42.274: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:42.277: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:42.281: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:42.291: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:42.294: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:42.298: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:42.301: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:42.308: INFO: Lookups using dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6327.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6327.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local jessie_udp@dns-test-service-2.dns-6327.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6327.svc.cluster.local]
May 21 16:04:47.268: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:47.272: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:47.276: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:47.280: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:47.292: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:47.295: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:47.300: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:47.303: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:47.311: INFO: Lookups using dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6327.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6327.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local jessie_udp@dns-test-service-2.dns-6327.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6327.svc.cluster.local]
May 21 16:04:52.268: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:52.272: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:52.276: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:52.279: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:52.291: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:52.294: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:52.298: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:52.302: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:52.311: INFO: Lookups using dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6327.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6327.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local jessie_udp@dns-test-service-2.dns-6327.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6327.svc.cluster.local]
May 21 16:04:57.269: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:57.274: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:57.277: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:57.281: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:57.292: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:57.296: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:57.300: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:57.304: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:04:57.311: INFO: Lookups using dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6327.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6327.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local jessie_udp@dns-test-service-2.dns-6327.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6327.svc.cluster.local]
May 21 16:05:02.268: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:05:02.273: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:05:02.276: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:05:02.280: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:05:02.292: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:05:02.296: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:05:02.300: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:05:02.304: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6327.svc.cluster.local from pod dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546: the server could not find the requested resource (get pods dns-test-287f9d36-8a7a-4516-86e2-261fada4c546)
May 21 16:05:02.311: INFO: Lookups using dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6327.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6327.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6327.svc.cluster.local jessie_udp@dns-test-service-2.dns-6327.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6327.svc.cluster.local]
May 21 16:05:07.312: INFO: DNS probes using dns-6327/dns-test-287f9d36-8a7a-4516-86e2-261fada4c546 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:05:07.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6327" for this suite.
• [SLOW TEST:34.175 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for pods for Subdomain [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":39,"skipped":663,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:05:06.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-4e05d703-8f36-40c9-9a86-de1a3e3a4373
STEP: Creating a pod to test consume secrets
May 21 16:05:06.666: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f8abf1ea-24e3-4c38-83f1-66e19a030ff7" in namespace "projected-8535" to be "Succeeded or Failed"
May 21 16:05:06.669: INFO: Pod "pod-projected-secrets-f8abf1ea-24e3-4c38-83f1-66e19a030ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.797549ms
May 21 16:05:08.672: INFO: Pod "pod-projected-secrets-f8abf1ea-24e3-4c38-83f1-66e19a030ff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006498099s
STEP: Saw pod success
May 21 16:05:08.672: INFO: Pod "pod-projected-secrets-f8abf1ea-24e3-4c38-83f1-66e19a030ff7" satisfied condition "Succeeded or Failed"
May 21 16:05:08.676: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-f8abf1ea-24e3-4c38-83f1-66e19a030ff7 container projected-secret-volume-test:
STEP: delete the pod
May 21 16:05:08.692: INFO: Waiting for pod pod-projected-secrets-f8abf1ea-24e3-4c38-83f1-66e19a030ff7 to disappear
May 21 16:05:08.696: INFO: Pod pod-projected-secrets-f8abf1ea-24e3-4c38-83f1-66e19a030ff7 no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:05:08.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8535" for this suite.
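The projected-secret test above mounts the secret through a projected volume with `defaultMode` set, and the container verifies the permission bits of the mounted key file. The mode arithmetic can be sketched locally without a cluster; the file name and the mode 0o400 below are illustrative assumptions, not values from this run:

```python
import os
import stat
import tempfile

# A projected volume's defaultMode (e.g. 0o400) becomes the mode of each
# mounted key file; we model that here with chmod on a temp file.
default_mode = 0o400  # illustrative read-only mode

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"secret-value")
    path = f.name
os.chmod(path, default_mode)  # kubelet applies defaultMode like this (POSIX)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # → 0o400 on POSIX systems
os.unlink(path)
```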
•
------------------------------
[BeforeEach] [k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:05:07.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 21 16:05:09.392: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:05:09.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9916" for this suite.
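`FallbackToLogsOnError` only substitutes the container log for the termination message when the container *fails* and no termination-message file was written; on success with an empty file, the message stays empty, which is what the `Expected: &{} to match Container's Termination Message: --` line above verifies. A simplified model of that selection logic (a sketch, not kubelet's actual code):

```python
def termination_message(exit_code, message_file, logs,
                        policy="FallbackToLogsOnError"):
    """Simplified model of Kubernetes termination-message selection."""
    if message_file:
        return message_file  # an explicit message always wins
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        return logs          # fall back to logs only on failure
    return ""                # success + empty file => empty message

print(repr(termination_message(0, "", "some log output")))  # → '' (the case tested above)
print(repr(termination_message(1, "", "some log output")))  # → 'some log output'
```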
•
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":664,"failed":0}
SSSS
------------------------------
[BeforeEach] [k8s.io] Lease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:05:09.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Lease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:05:09.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-432" for this suite.
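The Lease test above exercises basic CRUD on `coordination.k8s.io/v1` Lease objects. The core idea of a lease, that a holder remains valid until `renewTime + leaseDurationSeconds`, can be sketched without a cluster (field names follow LeaseSpec; the timestamps below are made up):

```python
from datetime import datetime, timedelta

def lease_expired(renew_time, lease_duration_seconds, now):
    # A lease is stale once `now` is past renewTime + leaseDurationSeconds,
    # at which point another holder may acquire it.
    return now > renew_time + timedelta(seconds=lease_duration_seconds)

renew = datetime(2021, 5, 21, 16, 5, 0)
print(lease_expired(renew, 30, datetime(2021, 5, 21, 16, 5, 10)))  # → False
print(lease_expired(renew, 30, datetime(2021, 5, 21, 16, 6, 0)))   # → True
```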
•
------------------------------
{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":-1,"completed":41,"skipped":668,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:04:14.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:05:14.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5211" for this suite.
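A failing readiness probe only marks the pod NotReady (removing it from Service endpoints); it never restarts the container, which is the liveness probe's job. That is why the probe test above runs a full minute just to observe that nothing changes. A toy model of the distinction (a sketch, not kubelet's implementation):

```python
def apply_probe_failure(kind, status):
    """Toy model: readiness failures flip Ready; liveness failures restart."""
    status = dict(status)  # leave the input untouched
    if kind == "readiness":
        status["ready"] = False           # pod leaves Service endpoints
    elif kind == "liveness":
        status["restart_count"] += 1      # kubelet kills and restarts it
    return status

s = {"ready": True, "restart_count": 0}
print(apply_probe_failure("readiness", s))  # → {'ready': False, 'restart_count': 0}
print(apply_probe_failure("liveness", s))   # → {'ready': True, 'restart_count': 1}
```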
• [SLOW TEST:60.049 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":608,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:04:58.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:05:21.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5619" for this suite.
• [SLOW TEST:23.214 seconds]
[k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
blackbox test
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
when starting a container that exits
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42
should run with the expected status [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":532,"failed":0}
SSSSSSS
------------------------------
May 21 16:05:21.849: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:03:12.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting
for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-437 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 21 16:03:12.941: INFO: Found 0 stateful pods, waiting for 3 May 21 16:03:22.945: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 21 16:03:22.945: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 21 16:03:22.945: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 21 16:03:22.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=statefulset-437 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 21 16:03:23.213: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 21 16:03:23.213: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 21 16:03:23.213: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 21 16:03:33.245: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 21 16:03:43.263: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=statefulset-437 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 16:03:43.506: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 21 16:03:43.506: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 21 16:03:43.506: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 21 16:03:53.526: INFO: Waiting for StatefulSet statefulset-437/ss2 to complete update May 21 16:03:53.526: INFO: Waiting for Pod statefulset-437/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 21 16:03:53.526: INFO: Waiting for Pod statefulset-437/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 21 16:04:03.534: INFO: Waiting for StatefulSet statefulset-437/ss2 to complete update May 21 16:04:03.534: INFO: Waiting for Pod statefulset-437/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision May 21 16:04:13.533: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=statefulset-437 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 21 16:04:13.799: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 21 16:04:13.799: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 21 16:04:13.799: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 21 16:04:23.833: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 21 16:04:33.851: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=statefulset-437 exec 
ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 16:04:34.099: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 21 16:04:34.099: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 21 16:04:34.099: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 21 16:04:54.119: INFO: Waiting for StatefulSet statefulset-437/ss2 to complete update May 21 16:04:54.119: INFO: Waiting for Pod statefulset-437/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 21 16:05:04.128: INFO: Deleting all statefulset in ns statefulset-437 May 21 16:05:04.132: INFO: Scaling statefulset ss2 to 0 May 21 16:05:24.150: INFO: Waiting for statefulset status.replicas updated to 0 May 21 16:05:24.153: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:05:24.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-437" for this suite. 
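The rolling update and rollback flow the StatefulSet test drives programmatically (image `httpd:2.4.38-alpine` → `httpd:2.4.39-alpine`, then back) can be sketched with plain kubectl. The container name `webserver` is a hypothetical stand-in; the images and namespace are the ones from the log, and a live cluster is assumed.

```
# Update the pod template image; this creates a new controller revision
# and rolls pods in reverse ordinal order (ss2-2, ss2-1, ss2-0).
kubectl -n statefulset-437 set image statefulset/ss2 \
  webserver=docker.io/library/httpd:2.4.39-alpine

# Block until every pod is on the update revision.
kubectl -n statefulset-437 rollout status statefulset/ss2

# Roll back to the previous revision, again in reverse ordinal order.
kubectl -n statefulset-437 rollout undo statefulset/ss2
kubectl -n statefulset-437 rollout status statefulset/ss2
```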
• [SLOW TEST:131.272 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":7,"skipped":211,"failed":0} May 21 16:05:24.175: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:04:00.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-edf94c98-5bf8-47c1-ba08-f02af27fd315 STEP: Creating configMap with name cm-test-opt-upd-3a4ac8c1-5f5f-410b-9148-4195f2521813 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-edf94c98-5bf8-47c1-ba08-f02af27fd315 STEP: Updating configmap cm-test-opt-upd-3a4ac8c1-5f5f-410b-9148-4195f2521813 STEP: Creating configMap with name cm-test-opt-create-9e30030d-4126-41d9-bcec-8c6c8f435342 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:05:28.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7190" for this suite. • [SLOW TEST:88.460 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":706,"failed":0} May 21 16:05:28.966: INFO: Running AfterSuite actions on all nodes {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":47,"skipped":901,"failed":0} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:05:08.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-7725 May 21 16:05:10.755: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 
--kubeconfig=/root/.kube/config --namespace=services-7725 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 21 16:05:11.021: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" May 21 16:05:11.021: INFO: stdout: "iptables" May 21 16:05:11.021: INFO: proxyMode: iptables May 21 16:05:11.027: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 21 16:05:11.030: INFO: Pod kube-proxy-mode-detector still exists May 21 16:05:13.030: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 21 16:05:13.034: INFO: Pod kube-proxy-mode-detector still exists May 21 16:05:15.030: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 21 16:05:15.034: INFO: Pod kube-proxy-mode-detector still exists May 21 16:05:17.030: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 21 16:05:17.034: INFO: Pod kube-proxy-mode-detector still exists May 21 16:05:19.030: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 21 16:05:19.034: INFO: Pod kube-proxy-mode-detector still exists May 21 16:05:21.030: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 21 16:05:21.034: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-7725 STEP: creating replication controller affinity-nodeport-timeout in namespace services-7725 I0521 16:05:21.050117 24 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-7725, replica count: 3 I0521 16:05:24.100631 24 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 21 16:05:24.112: INFO: Creating new exec pod May 21 16:05:27.131: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-7725 exec 
execpod-affinity6lxzm -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' May 21 16:05:27.440: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" May 21 16:05:27.440: INFO: stdout: "" May 21 16:05:27.441: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-7725 exec execpod-affinity6lxzm -- /bin/sh -x -c nc -zv -t -w 2 10.96.187.44 80' May 21 16:05:27.696: INFO: stderr: "+ nc -zv -t -w 2 10.96.187.44 80\nConnection to 10.96.187.44 80 port [tcp/http] succeeded!\n" May 21 16:05:27.696: INFO: stdout: "" May 21 16:05:27.696: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-7725 exec execpod-affinity6lxzm -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.2 32170' May 21 16:05:27.940: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.2 32170\nConnection to 172.18.0.2 32170 port [tcp/32170] succeeded!\n" May 21 16:05:27.940: INFO: stdout: "" May 21 16:05:27.940: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-7725 exec execpod-affinity6lxzm -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.4 32170' May 21 16:05:28.199: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.4 32170\nConnection to 172.18.0.4 32170 port [tcp/32170] succeeded!\n" May 21 16:05:28.199: INFO: stdout: "" May 21 16:05:28.199: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-7725 exec execpod-affinity6lxzm -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.2:32170/ ; done' May 21 16:05:28.578: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32170/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32170/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.18.0.2:32170/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32170/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32170/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32170/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32170/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32170/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32170/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32170/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32170/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32170/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32170/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32170/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32170/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.2:32170/\n" May 21 16:05:28.578: INFO: stdout: "\naffinity-nodeport-timeout-fpzt6\naffinity-nodeport-timeout-fpzt6\naffinity-nodeport-timeout-fpzt6\naffinity-nodeport-timeout-fpzt6\naffinity-nodeport-timeout-fpzt6\naffinity-nodeport-timeout-fpzt6\naffinity-nodeport-timeout-fpzt6\naffinity-nodeport-timeout-fpzt6\naffinity-nodeport-timeout-fpzt6\naffinity-nodeport-timeout-fpzt6\naffinity-nodeport-timeout-fpzt6\naffinity-nodeport-timeout-fpzt6\naffinity-nodeport-timeout-fpzt6\naffinity-nodeport-timeout-fpzt6\naffinity-nodeport-timeout-fpzt6\naffinity-nodeport-timeout-fpzt6" May 21 16:05:28.578: INFO: Received response from host: affinity-nodeport-timeout-fpzt6 May 21 16:05:28.578: INFO: Received response from host: affinity-nodeport-timeout-fpzt6 May 21 16:05:28.578: INFO: Received response from host: affinity-nodeport-timeout-fpzt6 May 21 16:05:28.578: INFO: Received response from host: affinity-nodeport-timeout-fpzt6 May 21 16:05:28.578: INFO: Received response from host: affinity-nodeport-timeout-fpzt6 May 21 16:05:28.578: INFO: Received response from host: affinity-nodeport-timeout-fpzt6 May 21 
16:05:28.578: INFO: Received response from host: affinity-nodeport-timeout-fpzt6 May 21 16:05:28.578: INFO: Received response from host: affinity-nodeport-timeout-fpzt6 May 21 16:05:28.578: INFO: Received response from host: affinity-nodeport-timeout-fpzt6 May 21 16:05:28.578: INFO: Received response from host: affinity-nodeport-timeout-fpzt6 May 21 16:05:28.578: INFO: Received response from host: affinity-nodeport-timeout-fpzt6 May 21 16:05:28.578: INFO: Received response from host: affinity-nodeport-timeout-fpzt6 May 21 16:05:28.578: INFO: Received response from host: affinity-nodeport-timeout-fpzt6 May 21 16:05:28.578: INFO: Received response from host: affinity-nodeport-timeout-fpzt6 May 21 16:05:28.578: INFO: Received response from host: affinity-nodeport-timeout-fpzt6 May 21 16:05:28.578: INFO: Received response from host: affinity-nodeport-timeout-fpzt6 May 21 16:05:28.578: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-7725 exec execpod-affinity6lxzm -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.2:32170/' May 21 16:05:28.840: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.18.0.2:32170/\n" May 21 16:05:28.840: INFO: stdout: "affinity-nodeport-timeout-fpzt6" May 21 16:05:43.840: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=services-7725 exec execpod-affinity6lxzm -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.2:32170/' May 21 16:05:44.129: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.18.0.2:32170/\n" May 21 16:05:44.129: INFO: stdout: "affinity-nodeport-timeout-7pdpm" May 21 16:05:44.129: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-7725, will wait for the garbage collector to delete the pods May 21 16:05:44.204: INFO: Deleting ReplicationController affinity-nodeport-timeout 
took: 6.166121ms May 21 16:05:44.304: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 100.263008ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:05:50.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7725" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:41.826 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":48,"skipped":901,"failed":0} May 21 16:05:50.536: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:55.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 16:05:55.110: INFO: Deleting pod "var-expansion-adef1bf7-12b4-4c60-a013-4e72f4deb7de" in namespace "var-expansion-9483" May 21 16:05:55.115: INFO: Wait up to 5m0s for pod 
"var-expansion-adef1bf7-12b4-4c60-a013-4e72f4deb7de" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:05:57.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9483" for this suite. • [SLOW TEST:122.064 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":-1,"completed":38,"skipped":496,"failed":0} May 21 16:05:57.136: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:04:56.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-5ff6191d-88c8-4a1c-9b63-d7892a009e22 STEP: Creating configMap with name cm-test-opt-upd-70db0470-1ebc-414d-b7c0-db6a653352c7 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-5ff6191d-88c8-4a1c-9b63-d7892a009e22 STEP: Updating configmap cm-test-opt-upd-70db0470-1ebc-414d-b7c0-db6a653352c7 STEP: Creating 
configMap with name cm-test-opt-create-24c58037-a933-496c-971e-6dbe30f83670 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:06:06.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6611" for this suite. • [SLOW TEST:70.389 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":210,"failed":0} May 21 16:06:06.733: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:02:03.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-bdd3fde4-4d4b-4a20-9526-83c4e145fead in namespace container-probe-327 May 21 16:02:07.315: INFO: Started pod liveness-bdd3fde4-4d4b-4a20-9526-83c4e145fead in namespace 
container-probe-327 STEP: checking the pod's current state and verifying that restartCount is present May 21 16:02:07.318: INFO: Initial restart count of pod liveness-bdd3fde4-4d4b-4a20-9526-83c4e145fead is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:06:07.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-327" for this suite. • [SLOW TEST:244.528 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:03:40.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-be842122-e3a6-4da1-b7f5-09083dafe29a in namespace container-probe-8135 May 21 16:03:42.949: INFO: Started pod liveness-be842122-e3a6-4da1-b7f5-09083dafe29a in namespace container-probe-8135 STEP: checking the pod's current state and verifying that restartCount is present May 21 16:03:42.952: 
INFO: Initial restart count of pod liveness-be842122-e3a6-4da1-b7f5-09083dafe29a is 0 May 21 16:04:00.986: INFO: Restart count of pod container-probe-8135/liveness-be842122-e3a6-4da1-b7f5-09083dafe29a is now 1 (18.034733779s elapsed) May 21 16:04:21.024: INFO: Restart count of pod container-probe-8135/liveness-be842122-e3a6-4da1-b7f5-09083dafe29a is now 2 (38.071924688s elapsed) May 21 16:04:41.062: INFO: Restart count of pod container-probe-8135/liveness-be842122-e3a6-4da1-b7f5-09083dafe29a is now 3 (58.110727126s elapsed) May 21 16:05:01.102: INFO: Restart count of pod container-probe-8135/liveness-be842122-e3a6-4da1-b7f5-09083dafe29a is now 4 (1m18.150535824s elapsed) May 21 16:06:11.230: INFO: Restart count of pod container-probe-8135/liveness-be842122-e3a6-4da1-b7f5-09083dafe29a is now 5 (2m28.278251941s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:06:11.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8135" for this suite. 
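The monotonically increasing restart count the probe test asserts on is read from the container status, not from events. A sketch of the equivalent check (pod name is hypothetical; a live cluster is assumed):

```
# restartCount lives in .status.containerStatuses; the liveness tests
# poll it and require it only ever to increase.
kubectl -n container-probe-8135 get pod liveness-example \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'
```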
• [SLOW TEST:150.337 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":407,"failed":0} May 21 16:06:11.249: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:05:09.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: Gathering metrics W0521 16:05:10.608307 20 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 21 16:06:12.628: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:06:12.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3187" for this suite. 
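The garbage-collector test above deletes a Deployment and waits for its ReplicaSet and Pods to be collected through their ownerReferences. A hedged sketch of the two deletion modes, using v1.19-era kubectl flags (the deployment name is hypothetical; a live cluster is assumed):

```
# Default: dependents (ReplicaSets, Pods) are garbage-collected via
# ownerReferences, which is what this test waits for.
kubectl delete deployment example-deploy

# Alternative: --cascade=false (the v1.19-era spelling) orphans the
# ReplicaSet and Pods instead of collecting them.
kubectl delete deployment example-deploy --cascade=false
```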
• [SLOW TEST:63.111 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":42,"skipped":675,"failed":0}
May 21 16:06:12.639: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:05:14.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 16:05:14.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR
May 21 16:05:14.951: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-05-21T16:05:14Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-05-21T16:05:14Z]] name:name1 resourceVersion:29325 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:da932d4b-2856-478c-8fa4-e07b897ea1d4] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
May 21 16:05:24.958: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-05-21T16:05:24Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-05-21T16:05:24Z]] name:name2 resourceVersion:29483 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:860e7716-a6d3-40fb-b108-ea0f1115eca4] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
May 21 16:05:34.967: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-05-21T16:05:14Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-05-21T16:05:34Z]] name:name1 resourceVersion:29643 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:da932d4b-2856-478c-8fa4-e07b897ea1d4] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
May 21 16:05:44.974: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-05-21T16:05:24Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-05-21T16:05:44Z]] name:name2 resourceVersion:29702 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:860e7716-a6d3-40fb-b108-ea0f1115eca4] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
May 21 16:05:54.982: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-05-21T16:05:14Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-05-21T16:05:34Z]] name:name1 resourceVersion:29755 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:da932d4b-2856-478c-8fa4-e07b897ea1d4] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
May 21 16:06:04.990: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-05-21T16:05:24Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-05-21T16:05:44Z]] name:name2 resourceVersion:29839 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:860e7716-a6d3-40fb-b108-ea0f1115eca4] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:06:15.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-5000" for this suite.
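Besides the human-readable output, the runner interleaves one machine-readable JSON record per completed spec (the `{"msg":"PASSED ...","total":-1,...}` lines above), which tooling can tally directly instead of scraping the log. A minimal sketch of that idea; the helper name is illustrative, not part of the suite:

```python
import json

def tally_results(log_lines):
    """Count PASSED/FAILED specs from the per-spec JSON records
    interleaved with the e2e runner's log output."""
    passed = failed = 0
    for line in log_lines:
        line = line.strip()
        if not (line.startswith('{"msg"') and line.endswith("}")):
            continue  # not a per-spec result record
        record = json.loads(line)
        if record["msg"].startswith("PASSED"):
            passed += 1
        elif record["msg"].startswith("FAILED"):
            failed += 1
    return passed, failed

# Example with a record copied from the log above:
log = ['{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch '
       '[Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on '
       'custom resource definition objects [Conformance]",'
       '"total":-1,"completed":34,"skipped":657,"failed":0}']
print(tally_results(log))  # (1, 0)
```

The `completed` and `skipped` fields in each record are per-worker running counters, which is why they differ between records emitted by different parallel nodes.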
• [SLOW TEST:61.168 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":34,"skipped":657,"failed":0}
May 21 16:06:15.516: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":396,"failed":0}
May 21 16:06:07.803: INFO: Running AfterSuite actions on all nodes
May 21 16:06:15.555: INFO: Running AfterSuite actions on node 1
May 21 16:06:15.555: INFO: Skipping dumping logs from cluster

Ran 286 of 5484 Specs in 507.016 seconds
SUCCESS! -- 286 Passed | 0 Failed | 0 Pending | 5198 Skipped

Ginkgo ran 1 suite in 8m28.748091303s
Test Suite Passed
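The closing Ginkgo summary ("286 Passed | 0 Failed | 0 Pending | 5198 Skipped") has a fixed shape, so a CI script can extract the counts with a regex rather than relying on the exit code alone. A minimal sketch; the pattern and function name are illustrative:

```python
import re

# Matches the count segment of a Ginkgo suite summary, e.g.
# "SUCCESS! -- 286 Passed | 0 Failed | 0 Pending | 5198 Skipped"
SUMMARY = re.compile(
    r"(\d+) Passed \| (\d+) Failed \| (\d+) Pending \| (\d+) Skipped"
)

def parse_summary(line):
    """Return (passed, failed, pending, skipped) from a Ginkgo summary line."""
    m = SUMMARY.search(line)
    if m is None:
        raise ValueError("not a Ginkgo summary line")
    return tuple(int(n) for n in m.groups())

print(parse_summary(
    "SUCCESS! -- 286 Passed | 0 Failed | 0 Pending | 5198 Skipped"
))  # (286, 0, 0, 5198)
```

As a sanity check against the line above it, Passed + Skipped here equals the 5484 total specs, since no specs were pending or failed.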