I0904 13:03:36.604675 7 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0904 13:03:36.604853 7 e2e.go:129] Starting e2e run "54d8b692-ad73-47a4-be0a-850bae8fa01e" on Ginkgo node 1
{"msg":"Test Suite starting","total":303,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1599224615 - Will randomize all specs
Will run 303 of 5232 specs

Sep 4 13:03:36.660: INFO: >>> kubeConfig: /root/.kube/config
Sep 4 13:03:36.664: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Sep 4 13:03:36.686: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep 4 13:03:36.718: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep 4 13:03:36.718: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Sep 4 13:03:36.718: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep 4 13:03:36.726: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Sep 4 13:03:36.726: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Sep 4 13:03:36.726: INFO: e2e test version: v1.19.1-rc.0
Sep 4 13:03:36.727: INFO: kube-apiserver version: v1.19.0-rc.1
Sep 4 13:03:36.727: INFO: >>> kubeConfig: /root/.kube/config
Sep 4 13:03:36.732: INFO: Cluster IP family: ipv4
S
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 4 13:03:36.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
Sep 4 13:03:36.819: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 4 13:03:42.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1255" for this suite.

• [SLOW TEST:6.138 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when scheduling a read only busybox container
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:188
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":1,"skipped":1,"failed":0}
SSSSSSSSSS
------------------------------
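The spec above exercises the kubelet's handling of a container whose root filesystem is mounted read-only. For reference, a minimal sketch of the kind of pod this spec creates; the pod name, image tag, and command are illustrative, not taken from this run:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox-readonly-fs
  spec:
    restartPolicy: Never
    containers:
    - name: busybox-readonly-fs
      image: busybox:1.29
      command: ["/bin/sh", "-c", "sleep 600"]
      securityContext:
        readOnlyRootFilesystem: true
  EOF
  # Any write to the root filesystem should now fail:
  kubectl exec busybox-readonly-fs -- sh -c 'echo x > /file' || echo "write rejected, as expected"
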
[sig-storage] EmptyDir volumes
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 4 13:03:42.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep 4 13:03:43.040: INFO: Waiting up to 5m0s for pod "pod-57f71f2e-f9bc-439f-adb7-8c297f7b1e41" in namespace "emptydir-7361" to be "Succeeded or Failed"
Sep 4 13:03:43.057: INFO: Pod "pod-57f71f2e-f9bc-439f-adb7-8c297f7b1e41": Phase="Pending", Reason="", readiness=false. Elapsed: 17.115156ms
Sep 4 13:03:45.061: INFO: Pod "pod-57f71f2e-f9bc-439f-adb7-8c297f7b1e41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02094326s
Sep 4 13:03:47.065: INFO: Pod "pod-57f71f2e-f9bc-439f-adb7-8c297f7b1e41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025094172s
STEP: Saw pod success
Sep 4 13:03:47.065: INFO: Pod "pod-57f71f2e-f9bc-439f-adb7-8c297f7b1e41" satisfied condition "Succeeded or Failed"
Sep 4 13:03:47.068: INFO: Trying to get logs from node latest-worker pod pod-57f71f2e-f9bc-439f-adb7-8c297f7b1e41 container test-container:
STEP: delete the pod
Sep 4 13:03:47.274: INFO: Waiting for pod pod-57f71f2e-f9bc-439f-adb7-8c297f7b1e41 to disappear
Sep 4 13:03:47.345: INFO: Pod pod-57f71f2e-f9bc-439f-adb7-8c297f7b1e41 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 4 13:03:47.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7361" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":2,"skipped":11,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
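The emptydir spec above creates a pod that writes a 0666-mode file onto an emptyDir volume backed by the node's default medium and checks the result as root. A minimal sketch of the same mechanism (pod name, image, and paths are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox:1.29
      command: ["/bin/sh", "-c", "touch /mnt/test/f && chmod 0666 /mnt/test/f && ls -ln /mnt/test/f"]
      volumeMounts:
      - name: scratch
        mountPath: /mnt/test
    volumes:
    - name: scratch
      emptyDir: {}   # default medium is node storage; medium: Memory would use tmpfs
  EOF
  kubectl logs emptydir-demo   # once the pod succeeds, shows the 0666 file owned by uid 0
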
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":2,"skipped":11,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:03:47.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition Sep 4 13:03:47.572: INFO: Waiting up to 5m0s for pod "var-expansion-bea348e1-141c-4ec8-a63c-f94a67bf9385" in namespace "var-expansion-7689" to be "Succeeded or Failed" Sep 4 13:03:47.574: INFO: Pod "var-expansion-bea348e1-141c-4ec8-a63c-f94a67bf9385": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070861ms Sep 4 13:03:49.578: INFO: Pod "var-expansion-bea348e1-141c-4ec8-a63c-f94a67bf9385": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006253114s Sep 4 13:03:51.582: INFO: Pod "var-expansion-bea348e1-141c-4ec8-a63c-f94a67bf9385": Phase="Running", Reason="", readiness=true. Elapsed: 4.010160954s Sep 4 13:03:53.586: INFO: Pod "var-expansion-bea348e1-141c-4ec8-a63c-f94a67bf9385": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014459521s STEP: Saw pod success Sep 4 13:03:53.586: INFO: Pod "var-expansion-bea348e1-141c-4ec8-a63c-f94a67bf9385" satisfied condition "Succeeded or Failed" Sep 4 13:03:53.589: INFO: Trying to get logs from node latest-worker pod var-expansion-bea348e1-141c-4ec8-a63c-f94a67bf9385 container dapi-container: STEP: delete the pod Sep 4 13:03:53.645: INFO: Waiting for pod var-expansion-bea348e1-141c-4ec8-a63c-f94a67bf9385 to disappear Sep 4 13:03:53.682: INFO: Pod var-expansion-bea348e1-141c-4ec8-a63c-f94a67bf9385 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:03:53.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7689" for this suite. 
[k8s.io] Lease
  lease API should be available [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Lease
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 4 13:03:53.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Lease
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 4 13:03:54.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-103" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":303,"completed":4,"skipped":41,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 4 13:03:54.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Sep 4 13:04:00.895: INFO: Successfully updated pod "labelsupdatef0dd379a-5f26-4ddb-b736-0df97b1e89ab"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 4 13:04:02.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5586" for this suite.

• [SLOW TEST:8.941 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":5,"skipped":49,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
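The downward-api spec above mounts the pod's labels as a file through a downwardAPI volume and verifies the file is refreshed after the labels change. A minimal sketch of that wiring (names and label values are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: labels-demo
    labels:
      team: blue
  spec:
    containers:
    - name: client-container
      image: busybox:1.29
      command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: labels
          fieldRef:
            fieldPath: metadata.labels
  EOF
  # Relabel the running pod; the kubelet rewrites /etc/podinfo/labels shortly afterwards:
  kubectl label pod labels-demo team=red --overwrite
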
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 4 13:04:02.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-0571a862-2964-4523-a0a5-0e40825dd9f5
STEP: Creating a pod to test consume secrets
Sep 4 13:04:03.061: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7278caf0-228b-423a-a286-75c0f95f3ede" in namespace "projected-8651" to be "Succeeded or Failed"
Sep 4 13:04:03.111: INFO: Pod "pod-projected-secrets-7278caf0-228b-423a-a286-75c0f95f3ede": Phase="Pending", Reason="", readiness=false. Elapsed: 49.578251ms
Sep 4 13:04:05.410: INFO: Pod "pod-projected-secrets-7278caf0-228b-423a-a286-75c0f95f3ede": Phase="Pending", Reason="", readiness=false. Elapsed: 2.348733367s
Sep 4 13:04:07.414: INFO: Pod "pod-projected-secrets-7278caf0-228b-423a-a286-75c0f95f3ede": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.353057332s
STEP: Saw pod success
Sep 4 13:04:07.414: INFO: Pod "pod-projected-secrets-7278caf0-228b-423a-a286-75c0f95f3ede" satisfied condition "Succeeded or Failed"
Sep 4 13:04:07.419: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-7278caf0-228b-423a-a286-75c0f95f3ede container projected-secret-volume-test:
STEP: delete the pod
Sep 4 13:04:07.497: INFO: Waiting for pod pod-projected-secrets-7278caf0-228b-423a-a286-75c0f95f3ede to disappear
Sep 4 13:04:07.514: INFO: Pod pod-projected-secrets-7278caf0-228b-423a-a286-75c0f95f3ede no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 4 13:04:07.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8651" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":6,"skipped":69,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
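The projected-secret spec above consumes a secret through a projected volume, remapping the key to a new path and setting an explicit per-item file mode. A minimal sketch (secret name, key, and mode are illustrative):

  kubectl create secret generic demo-secret --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-secret-volume-test
      image: busybox:1.29
      command: ["/bin/sh", "-c", "ls -ln /etc/projected && cat /etc/projected/new-path-data-1"]
      volumeMounts:
      - name: projected-volume
        mountPath: /etc/projected
    volumes:
    - name: projected-volume
      projected:
        sources:
        - secret:
            name: demo-secret
            items:
            - key: data-1
              path: new-path-data-1
              mode: 0400   # the per-item "Item Mode" the spec name refers to
  EOF
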
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":6,"skipped":69,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:04:07.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-9899 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9899 to expose endpoints map[] Sep 4 13:04:07.654: INFO: successfully validated that service multi-endpoint-test in namespace services-9899 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-9899 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9899 to expose endpoints map[pod1:[100]] Sep 4 13:04:11.699: INFO: successfully validated that service multi-endpoint-test in namespace services-9899 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-9899 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9899 to expose endpoints map[pod1:[100] pod2:[101]] Sep 4 13:04:15.809: INFO: Unexpected endpoints: found map[113a1582-cb05-4419-99b1-50b9a84fa5f0:[100]], expected map[pod1:[100] pod2:[101]], will retry Sep 4 13:04:16.894: INFO: successfully validated that service multi-endpoint-test in namespace services-9899 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-9899 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9899 to expose endpoints map[pod2:[101]] Sep 4 13:04:17.054: INFO: successfully validated that service multi-endpoint-test in namespace services-9899 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-9899 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9899 to expose endpoints map[] Sep 4 13:04:17.828: INFO: successfully validated that service multi-endpoint-test in namespace services-9899 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:04:18.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9899" for this suite. 
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 4 13:04:19.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Sep 4 13:04:19.817: INFO: >>> kubeConfig: /root/.kube/config
Sep 4 13:04:21.919: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 4 13:04:35.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7266" for this suite.
• [SLOW TEST:16.198 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":303,"completed":8,"skipped":84,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl diff
  should check if kubectl diff finds a difference for Deployments [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 4 13:04:35.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should check if kubectl diff finds a difference for Deployments [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create deployment with httpd image
Sep 4 13:04:35.641: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f -'
Sep 4 13:04:41.101: INFO: stderr: ""
Sep 4 13:04:41.101: INFO: stdout: "deployment.apps/httpd-deployment created\n"
STEP: verify diff finds difference between live and declared image
Sep 4 13:04:41.102: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config diff -f -'
Sep 4 13:04:41.678: INFO: rc: 1
Sep 4 13:04:41.678: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete -f -'
Sep 4 13:04:41.796: INFO: stderr: ""
Sep 4 13:04:41.796: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 4 13:04:41.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5722" for this suite.

• [SLOW TEST:6.407 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl diff
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:888
    should check if kubectl diff finds a difference for Deployments [Conformance]
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":303,"completed":9,"skipped":87,"failed":0}
SSSSS
------------------------------
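The rc: 1 above is the expected outcome: kubectl diff exits 0 when live and declared state match, 1 when differences are found, and greater than 1 on an actual error. A sketch of reproducing the check by hand (deployment name and image tags are illustrative):

  kubectl create deployment httpd-deployment --image=httpd:2.4.38-alpine
  kubectl get deployment httpd-deployment -o yaml \
    | sed 's/httpd:2.4.38-alpine/httpd:2.4.39-alpine/' \
    | kubectl diff -f -
  echo "exit code: $?"   # 1 means a difference was found, not a failure
  kubectl delete deployment httpd-deployment
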
[sig-instrumentation] Events API
  should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-instrumentation] Events API
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 4 13:04:41.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
STEP: listing events with field selection filtering on source
STEP: listing events with field selection filtering on reportingController
STEP: getting the test event
STEP: patching the test event
STEP: getting the test event
STEP: updating the test event
STEP: getting the test event
STEP: deleting the test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
[AfterEach] [sig-instrumentation] Events API
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 4 13:04:42.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-8248" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":10,"skipped":92,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
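The spec above drives the events.k8s.io API group rather than core v1 events. Roughly the same listing and filtering is available through kubectl; the namespace and field value here are illustrative:

  kubectl get events.events.k8s.io -A
  kubectl get events.events.k8s.io -n my-namespace --field-selector reportingController=my-controller
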
•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":10,"skipped":92,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:04:42.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Sep 4 13:04:48.136: INFO: &Pod{ObjectMeta:{send-events-6044df4e-2711-42dc-9d3c-abc740c030f6 events-699 /api/v1/namespaces/events-699/pods/send-events-6044df4e-2711-42dc-9d3c-abc740c030f6 38e6a9db-0fd4-4d93-9933-e585f5e92a90 6797596 0 2020-09-04 13:04:42 +0000 UTC map[name:foo time:119998250] map[] [] [] [{e2e.test Update v1 2020-09-04 13:04:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-04 13:04:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.123\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t4g4p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t4g4p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t4g4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:04:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:04:46 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:04:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:04:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.123,StartTime:2020-09-04 13:04:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-04 13:04:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://b2969d5c1e9e180cdcc3b4b99a8ae3fdfaa00a2f3ff2e63f3f6cf4f9285b196b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.123,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Sep 4 13:04:50.142: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Sep 4 13:04:52.146: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:04:52.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-699" for this suite. • [SLOW TEST:10.176 seconds] [k8s.io] [sig-node] Events /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":303,"completed":11,"skipped":124,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:04:52.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 13:04:52.269: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows 
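The spec above waits for one event from the scheduler (source default-scheduler) and one from the kubelet for the pod it created. With core v1 events, that filtering looks roughly like this (the pod name is illustrative; this run's pod was deleted with its namespace):

  POD=send-events-demo   # substitute the pod under inspection
  kubectl get events --field-selector involvedObject.name=$POD,source=default-scheduler
  kubectl get events --field-selector involvedObject.name=$POD,source=kubelet
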
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 4 13:04:52.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 4 13:04:52.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Sep 4 13:04:55.227: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8509 create -f -'
Sep 4 13:04:59.068: INFO: stderr: ""
Sep 4 13:04:59.068: INFO: stdout: "e2e-test-crd-publish-openapi-4438-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Sep 4 13:04:59.068: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8509 delete e2e-test-crd-publish-openapi-4438-crds test-foo'
Sep 4 13:04:59.206: INFO: stderr: ""
Sep 4 13:04:59.206: INFO: stdout: "e2e-test-crd-publish-openapi-4438-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Sep 4 13:04:59.206: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8509 apply -f -'
Sep 4 13:04:59.563: INFO: stderr: ""
Sep 4 13:04:59.563: INFO: stdout: "e2e-test-crd-publish-openapi-4438-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Sep 4 13:04:59.563: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8509 delete e2e-test-crd-publish-openapi-4438-crds test-foo'
Sep 4 13:04:59.686: INFO: stderr: ""
Sep 4 13:04:59.686: INFO: stdout: "e2e-test-crd-publish-openapi-4438-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Sep 4 13:04:59.686: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8509 create -f -'
Sep 4 13:04:59.993: INFO: rc: 1
Sep 4 13:04:59.993: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8509 apply -f -'
Sep 4 13:05:00.303: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Sep 4 13:05:00.303: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8509 create -f -'
Sep 4 13:05:00.600: INFO: rc: 1
Sep 4 13:05:00.600: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8509 apply -f -'
Sep 4 13:05:00.931: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Sep 4 13:05:00.931: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4438-crds'
Sep 4 13:05:01.256: INFO: stderr: ""
Sep 4 13:05:01.256: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4438-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Sep 4 13:05:01.256: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4438-crds.metadata'
Sep 4 13:05:01.625: INFO: stderr: ""
Sep 4 13:05:01.625: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4438-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Sep 4 13:05:01.625: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4438-crds.spec'
Sep 4 13:05:01.965: INFO: stderr: ""
Sep 4 13:05:01.965: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4438-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n"
Sep 4 13:05:01.965: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4438-crds.spec.bars'
Sep 4 13:05:02.278: INFO: stderr: ""
Sep 4 13:05:02.278: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4438-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Sep 4 13:05:02.278: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4438-crds.spec.bars2'
Sep 4 13:05:02.571: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 4 13:05:04.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8509" for this suite.
• [SLOW TEST:12.412 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":303,"completed":12,"skipped":134,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 4 13:05:04.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 4 13:05:04.749: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bd5c5121-5ed1-4539-9706-ab8a28171fcf" in namespace "projected-243" to be "Succeeded or Failed"
Sep 4 13:05:04.757: INFO: Pod "downwardapi-volume-bd5c5121-5ed1-4539-9706-ab8a28171fcf": Phase="Pending", Reason="", readiness=false. Elapsed: 7.503927ms
Sep 4 13:05:07.724: INFO: Pod "downwardapi-volume-bd5c5121-5ed1-4539-9706-ab8a28171fcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.974937817s
Sep 4 13:05:09.746: INFO: Pod "downwardapi-volume-bd5c5121-5ed1-4539-9706-ab8a28171fcf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.997288684s
Sep 4 13:05:11.836: INFO: Pod "downwardapi-volume-bd5c5121-5ed1-4539-9706-ab8a28171fcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.087358454s
STEP: Saw pod success
Sep 4 13:05:11.836: INFO: Pod "downwardapi-volume-bd5c5121-5ed1-4539-9706-ab8a28171fcf" satisfied condition "Succeeded or Failed"
Sep 4 13:05:11.839: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-bd5c5121-5ed1-4539-9706-ab8a28171fcf container client-container:
STEP: delete the pod
Sep 4 13:05:12.034: INFO: Waiting for pod downwardapi-volume-bd5c5121-5ed1-4539-9706-ab8a28171fcf to disappear
Sep 4 13:05:12.074: INFO: Pod downwardapi-volume-bd5c5121-5ed1-4539-9706-ab8a28171fcf no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 4 13:05:12.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-243" for this suite.

• [SLOW TEST:7.452 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":13,"skipped":194,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
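The projected-downwardAPI spec above surfaces a container's CPU request through a resourceFieldRef. A minimal sketch (pod name, request, and divisor are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: cpu-request-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox:1.29
      command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_request"]
      resources:
        requests:
          cpu: 250m
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_request
          resourceFieldRef:
            containerName: client-container
            resource: requests.cpu
            divisor: 1m
  EOF
  kubectl logs cpu-request-demo   # prints 250 (the 250m request divided by the 1m divisor)
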
will retry Sep 4 13:05:23.891: INFO: successfully validated that service endpoint-test2 in namespace services-8112 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-8112 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8112 to expose endpoints map[pod2:[80]] Sep 4 13:05:24.053: INFO: successfully validated that service endpoint-test2 in namespace services-8112 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-8112 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8112 to expose endpoints map[] Sep 4 13:05:25.083: INFO: successfully validated that service endpoint-test2 in namespace services-8112 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:05:26.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8112" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:14.039 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":303,"completed":14,"skipped":211,"failed":0} SSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:05:26.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token Sep 4 13:05:27.987: INFO: created pod pod-service-account-defaultsa Sep 4 13:05:27.987: INFO: pod pod-service-account-defaultsa service account token volume mount: true Sep 4 13:05:28.076: INFO: created pod pod-service-account-mountsa Sep 4 13:05:28.076: INFO: pod pod-service-account-mountsa service account token volume mount: true Sep 4 13:05:28.087: INFO: created pod pod-service-account-nomountsa Sep 4 13:05:28.087: INFO: pod pod-service-account-nomountsa service account token volume mount: false Sep 4 13:05:28.190: INFO: created pod pod-service-account-defaultsa-mountspec Sep 4 13:05:28.190: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Sep 4 13:05:28.203: INFO: created pod pod-service-account-mountsa-mountspec 
Sep 4 13:05:28.203: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Sep 4 13:05:28.257: INFO: created pod pod-service-account-nomountsa-mountspec Sep 4 13:05:28.257: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Sep 4 13:05:28.321: INFO: created pod pod-service-account-defaultsa-nomountspec Sep 4 13:05:28.321: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Sep 4 13:05:28.349: INFO: created pod pod-service-account-mountsa-nomountspec Sep 4 13:05:28.349: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Sep 4 13:05:28.501: INFO: created pod pod-service-account-nomountsa-nomountspec Sep 4 13:05:28.501: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:05:28.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9944" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":303,"completed":15,"skipped":214,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:05:28.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 4 13:05:28.863: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dac19fc0-c774-4a50-8246-2df29341d51d" in namespace "downward-api-4637" to be "Succeeded or Failed" Sep 4 13:05:28.890: INFO: Pod "downwardapi-volume-dac19fc0-c774-4a50-8246-2df29341d51d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.812009ms Sep 4 13:05:30.920: INFO: Pod "downwardapi-volume-dac19fc0-c774-4a50-8246-2df29341d51d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056643566s Sep 4 13:05:33.010: INFO: Pod "downwardapi-volume-dac19fc0-c774-4a50-8246-2df29341d51d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.147090564s Sep 4 13:05:35.472: INFO: Pod "downwardapi-volume-dac19fc0-c774-4a50-8246-2df29341d51d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.608972092s Sep 4 13:05:37.951: INFO: Pod "downwardapi-volume-dac19fc0-c774-4a50-8246-2df29341d51d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.087944946s Sep 4 13:05:40.082: INFO: Pod "downwardapi-volume-dac19fc0-c774-4a50-8246-2df29341d51d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.219004337s Sep 4 13:05:42.477: INFO: Pod "downwardapi-volume-dac19fc0-c774-4a50-8246-2df29341d51d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.61384774s Sep 4 13:05:44.812: INFO: Pod "downwardapi-volume-dac19fc0-c774-4a50-8246-2df29341d51d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.948878773s Sep 4 13:05:46.841: INFO: Pod "downwardapi-volume-dac19fc0-c774-4a50-8246-2df29341d51d": Phase="Running", Reason="", readiness=true. Elapsed: 17.977728468s Sep 4 13:05:48.844: INFO: Pod "downwardapi-volume-dac19fc0-c774-4a50-8246-2df29341d51d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.980467104s STEP: Saw pod success Sep 4 13:05:48.844: INFO: Pod "downwardapi-volume-dac19fc0-c774-4a50-8246-2df29341d51d" satisfied condition "Succeeded or Failed" Sep 4 13:05:48.845: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-dac19fc0-c774-4a50-8246-2df29341d51d container client-container: STEP: delete the pod Sep 4 13:05:49.150: INFO: Waiting for pod downwardapi-volume-dac19fc0-c774-4a50-8246-2df29341d51d to disappear Sep 4 13:05:49.219: INFO: Pod downwardapi-volume-dac19fc0-c774-4a50-8246-2df29341d51d no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:05:49.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4637" for this suite. • [SLOW TEST:21.745 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":16,"skipped":218,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:05:50.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:06:01.137: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2732" for this suite. • [SLOW TEST:10.749 seconds] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":303,"completed":17,"skipped":234,"failed":0} SSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:06:01.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-04e1413f-9209-4f0f-8d6d-b054500cde0c [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:06:01.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-710" for this suite. 
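------------------------------
An aside on the empty-key failure above: the rejection is server-side validation, not client behavior. A minimal Go sketch of the kind of object the test submits, assuming the k8s.io/api and k8s.io/apimachinery modules are on the module path and with an illustrative name; a Create call against a live API server is expected to fail with a 422 Invalid error.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// ConfigMap keys must be valid config keys; "" is not, so the API
	// server rejects this object at create time with a validation error.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"}, // illustrative name
		Data:       map[string]string{"": "value-1"},
	}
	out, _ := json.MarshalIndent(cm, "", "  ")
	fmt.Println(string(out))
}
------------------------------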
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":303,"completed":18,"skipped":239,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:06:01.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-54c82363-e048-4ec8-a833-589d7539128d in namespace container-probe-1497 Sep 4 13:06:07.368: INFO: Started pod busybox-54c82363-e048-4ec8-a833-589d7539128d in namespace container-probe-1497 STEP: checking the pod's current state and verifying that restartCount is present Sep 4 13:06:07.580: INFO: Initial restart count of pod busybox-54c82363-e048-4ec8-a833-589d7539128d is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:10:08.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1497" for this suite. 
• [SLOW TEST:246.856 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":19,"skipped":249,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:10:08.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:10:35.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8109" for this suite. 
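------------------------------
Adoption above works through label selection: the controller manager sees an orphan pod matching the new replication controller's selector and sets an ownerReference on it instead of creating a fresh replica. A minimal sketch of the two objects involved, with names and image illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"name": "pod-adoption"}

	// An orphan pod carrying the 'name' label the controller selects on.
	orphan := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pod-adoption", Image: "httpd"}},
		},
	}

	// An RC whose selector matches that label; with replicas=1 the
	// controller adopts the orphan rather than starting a new pod.
	replicas := int32(1)
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       orphan.Spec,
			},
		},
	}
	fmt.Println(orphan.Name, "selected by", rc.Name)
}
------------------------------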
• [SLOW TEST:27.373 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":303,"completed":20,"skipped":259,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:10:35.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support proxy with --port 0 [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server Sep 4 13:10:35.503: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:10:35.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2491" for this suite. 
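------------------------------
The -p 0 case above delegates port selection to the kernel: the proxy binds an ephemeral port, and the framework reads the assigned port back from the proxy's startup output before curling /api/. The underlying mechanism is ordinary TCP; a stdlib-only Go sketch:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Binding to port 0 lets the OS pick a free ephemeral port, the same
	// mechanism `kubectl proxy -p 0` relies on.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	fmt.Println("listening on port", ln.Addr().(*net.TCPAddr).Port)
}
------------------------------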
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":303,"completed":21,"skipped":283,"failed":0} ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:10:35.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-880082cb-edf8-4f4f-9fff-dacbd8647df7 STEP: Creating a pod to test consume configMaps Sep 4 13:10:35.677: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ba041b17-7259-40af-abf5-6bfa59dba112" in namespace "projected-8534" to be "Succeeded or Failed" Sep 4 13:10:35.717: INFO: Pod "pod-projected-configmaps-ba041b17-7259-40af-abf5-6bfa59dba112": Phase="Pending", Reason="", readiness=false. Elapsed: 39.993005ms Sep 4 13:10:38.585: INFO: Pod "pod-projected-configmaps-ba041b17-7259-40af-abf5-6bfa59dba112": Phase="Pending", Reason="", readiness=false. Elapsed: 2.907964129s Sep 4 13:10:40.601: INFO: Pod "pod-projected-configmaps-ba041b17-7259-40af-abf5-6bfa59dba112": Phase="Running", Reason="", readiness=true. Elapsed: 4.92446654s Sep 4 13:10:42.818: INFO: Pod "pod-projected-configmaps-ba041b17-7259-40af-abf5-6bfa59dba112": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.141177136s STEP: Saw pod success Sep 4 13:10:42.818: INFO: Pod "pod-projected-configmaps-ba041b17-7259-40af-abf5-6bfa59dba112" satisfied condition "Succeeded or Failed" Sep 4 13:10:42.820: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-ba041b17-7259-40af-abf5-6bfa59dba112 container projected-configmap-volume-test: STEP: delete the pod Sep 4 13:10:43.346: INFO: Waiting for pod pod-projected-configmaps-ba041b17-7259-40af-abf5-6bfa59dba112 to disappear Sep 4 13:10:43.914: INFO: Pod pod-projected-configmaps-ba041b17-7259-40af-abf5-6bfa59dba112 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:10:43.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8534" for this suite. 
• [SLOW TEST:8.365 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":22,"skipped":283,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:10:43.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:10:44.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7708" for this suite. 
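------------------------------
The delete case above asserts that removal succeeds even while the container is crash-looping. A hedged client-go sketch of such a delete; the kubeconfig path matches the suite's, while the namespace and pod name are hypothetical:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig the same way the suite does.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Deleting a pod whose only container always exits non-zero; the API
	// accepts the delete regardless of the container's state.
	grace := int64(0)
	err = cs.CoreV1().Pods("kubelet-test").Delete(context.TODO(),
		"bin-false-pod", metav1.DeleteOptions{GracePeriodSeconds: &grace})
	fmt.Println("delete error:", err)
}
------------------------------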
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":303,"completed":23,"skipped":318,"failed":0} SS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:10:44.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 13:10:44.412: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-ffd823d7-da0e-41cd-862e-f995327411ed" in namespace "security-context-test-7583" to be "Succeeded or Failed" Sep 4 13:10:44.432: INFO: Pod "busybox-readonly-false-ffd823d7-da0e-41cd-862e-f995327411ed": Phase="Pending", Reason="", readiness=false. Elapsed: 20.228903ms Sep 4 13:10:46.477: INFO: Pod "busybox-readonly-false-ffd823d7-da0e-41cd-862e-f995327411ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064786727s Sep 4 13:10:48.480: INFO: Pod "busybox-readonly-false-ffd823d7-da0e-41cd-862e-f995327411ed": Phase="Running", Reason="", readiness=true. Elapsed: 4.067836892s Sep 4 13:10:50.483: INFO: Pod "busybox-readonly-false-ffd823d7-da0e-41cd-862e-f995327411ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.071382073s Sep 4 13:10:50.483: INFO: Pod "busybox-readonly-false-ffd823d7-da0e-41cd-862e-f995327411ed" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:10:50.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7583" for this suite. 
• [SLOW TEST:6.264 seconds] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with readOnlyRootFilesystem /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":303,"completed":24,"skipped":320,"failed":0} SSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:10:50.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Sep 4 13:10:50.554: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. 
Sep 4 13:10:51.473: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Sep 4 13:10:55.689: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734821851, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734821851, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734821851, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734821851, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67c46cd746\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 4 13:10:57.693: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734821851, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734821851, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734821851, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734821851, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67c46cd746\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 4 13:10:59.974: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734821851, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734821851, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734821851, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734821851, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67c46cd746\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 4 13:11:01.692: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734821851, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734821851, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734821851, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734821851, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67c46cd746\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 4 13:11:04.430: INFO: Waited 720.878485ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:11:04.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-6734" for this suite. • [SLOW TEST:14.596 seconds] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":303,"completed":25,"skipped":324,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:11:05.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 4 13:11:08.531: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 4 13:11:10.596: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734821867, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734821867, 
loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734821868, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734821867, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 4 13:11:12.691: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734821867, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734821867, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734821868, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734821867, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 4 13:11:15.681: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:11:15.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-575" for this suite. STEP: Destroying namespace "webhook-575-markers" for this suite. 
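------------------------------
The patch/update steps above toggle which operations the webhook intercepts: with CREATE absent from the rules the first configMap is admitted unmutated, and once CREATE is patched back in the second one is mutated. A sketch of the rule being flipped, using k8s.io/api admissionregistration/v1 types; the webhook, namespace, and service names are illustrative:

package main

import (
	"fmt"

	admv1 "k8s.io/api/admissionregistration/v1"
)

func main() {
	fail := admv1.Fail
	none := admv1.SideEffectClassNone
	hook := admv1.MutatingWebhook{
		Name: "adding-configmap-data.example.com", // illustrative
		ClientConfig: admv1.WebhookClientConfig{
			Service: &admv1.ServiceReference{Namespace: "webhook-ns", Name: "e2e-test-webhook"},
		},
		// The test first updates Operations to exclude Create, then
		// patches Create back in, as below.
		Rules: []admv1.RuleWithOperations{{
			Operations: []admv1.OperationType{admv1.Create},
			Rule: admv1.Rule{
				APIGroups:   []string{""},
				APIVersions: []string{"v1"},
				Resources:   []string{"configmaps"},
			},
		}},
		FailurePolicy:           &fail,
		SideEffects:             &none,
		AdmissionReviewVersions: []string{"v1"},
	}
	fmt.Println("webhook intercepts:", hook.Rules[0].Operations)
}
------------------------------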
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.942 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":303,"completed":26,"skipped":364,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:11:16.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 4 13:11:16.098: INFO: Waiting up to 5m0s for pod "downwardapi-volume-60bfe7be-e36a-4ed0-9705-900246f9a913" in namespace "downward-api-3879" to be "Succeeded or Failed" Sep 4 13:11:16.102: INFO: Pod "downwardapi-volume-60bfe7be-e36a-4ed0-9705-900246f9a913": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11656ms Sep 4 13:11:18.106: INFO: Pod "downwardapi-volume-60bfe7be-e36a-4ed0-9705-900246f9a913": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007903897s Sep 4 13:11:20.687: INFO: Pod "downwardapi-volume-60bfe7be-e36a-4ed0-9705-900246f9a913": Phase="Pending", Reason="", readiness=false. Elapsed: 4.588849903s Sep 4 13:11:23.582: INFO: Pod "downwardapi-volume-60bfe7be-e36a-4ed0-9705-900246f9a913": Phase="Pending", Reason="", readiness=false. Elapsed: 7.483899485s Sep 4 13:11:25.586: INFO: Pod "downwardapi-volume-60bfe7be-e36a-4ed0-9705-900246f9a913": Phase="Pending", Reason="", readiness=false. Elapsed: 9.487291865s Sep 4 13:11:28.351: INFO: Pod "downwardapi-volume-60bfe7be-e36a-4ed0-9705-900246f9a913": Phase="Pending", Reason="", readiness=false. Elapsed: 12.252637622s Sep 4 13:11:30.676: INFO: Pod "downwardapi-volume-60bfe7be-e36a-4ed0-9705-900246f9a913": Phase="Pending", Reason="", readiness=false. Elapsed: 14.578280364s Sep 4 13:11:32.680: INFO: Pod "downwardapi-volume-60bfe7be-e36a-4ed0-9705-900246f9a913": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.582242554s Sep 4 13:11:35.974: INFO: Pod "downwardapi-volume-60bfe7be-e36a-4ed0-9705-900246f9a913": Phase="Pending", Reason="", readiness=false. Elapsed: 19.875537908s Sep 4 13:11:37.979: INFO: Pod "downwardapi-volume-60bfe7be-e36a-4ed0-9705-900246f9a913": Phase="Running", Reason="", readiness=true. Elapsed: 21.880890077s Sep 4 13:11:39.983: INFO: Pod "downwardapi-volume-60bfe7be-e36a-4ed0-9705-900246f9a913": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.884578505s STEP: Saw pod success Sep 4 13:11:39.983: INFO: Pod "downwardapi-volume-60bfe7be-e36a-4ed0-9705-900246f9a913" satisfied condition "Succeeded or Failed" Sep 4 13:11:39.985: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-60bfe7be-e36a-4ed0-9705-900246f9a913 container client-container: STEP: delete the pod Sep 4 13:11:40.058: INFO: Waiting for pod downwardapi-volume-60bfe7be-e36a-4ed0-9705-900246f9a913 to disappear Sep 4 13:11:40.067: INFO: Pod downwardapi-volume-60bfe7be-e36a-4ed0-9705-900246f9a913 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:11:40.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3879" for this suite. • [SLOW TEST:24.047 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":27,"skipped":375,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:11:40.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-d736503d-84cb-47d1-9a59-2774c2d86359 [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:11:40.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1622" for this suite. 
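------------------------------
As with the ConfigMap case earlier in this run, the empty secret key is rejected by server-side validation (a 422 Invalid error on Create); the only difference is that Secret data carries bytes. A minimal sketch with an illustrative name:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// "" is not a valid secret data key, so the API server rejects the
	// object at create time.
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-test"}, // illustrative
		Data:       map[string][]byte{"": []byte("value-1")},
	}
	for k := range secret.Data {
		fmt.Printf("key=%q (rejected by validation)\n", k)
	}
}
------------------------------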
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":303,"completed":28,"skipped":387,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:11:40.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Sep 4 13:11:45.796: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:11:46.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2673" for this suite. 
• [SLOW TEST:5.928 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":29,"skipped":412,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:11:46.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 4 13:11:46.289: INFO: Waiting up to 5m0s for pod "downwardapi-volume-34f73575-0462-4dfa-81fc-f57c130d4a4b" in namespace "projected-9704" to be "Succeeded or Failed" Sep 4 13:11:46.447: INFO: Pod "downwardapi-volume-34f73575-0462-4dfa-81fc-f57c130d4a4b": Phase="Pending", Reason="", readiness=false. Elapsed: 158.006076ms Sep 4 13:11:48.578: INFO: Pod "downwardapi-volume-34f73575-0462-4dfa-81fc-f57c130d4a4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289433562s Sep 4 13:11:51.339: INFO: Pod "downwardapi-volume-34f73575-0462-4dfa-81fc-f57c130d4a4b": Phase="Running", Reason="", readiness=true. Elapsed: 5.050087071s Sep 4 13:11:53.343: INFO: Pod "downwardapi-volume-34f73575-0462-4dfa-81fc-f57c130d4a4b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 7.053596856s STEP: Saw pod success Sep 4 13:11:53.343: INFO: Pod "downwardapi-volume-34f73575-0462-4dfa-81fc-f57c130d4a4b" satisfied condition "Succeeded or Failed" Sep 4 13:11:53.345: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-34f73575-0462-4dfa-81fc-f57c130d4a4b container client-container: STEP: delete the pod Sep 4 13:11:53.377: INFO: Waiting for pod downwardapi-volume-34f73575-0462-4dfa-81fc-f57c130d4a4b to disappear Sep 4 13:11:53.385: INFO: Pod downwardapi-volume-34f73575-0462-4dfa-81fc-f57c130d4a4b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:11:53.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9704" for this suite. • [SLOW TEST:7.259 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":30,"skipped":425,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:11:53.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 13:11:53.516: INFO: Creating deployment "webserver-deployment" Sep 4 13:11:53.530: INFO: Waiting for observed generation 1 Sep 4 13:11:56.504: INFO: Waiting for all required pods to come up Sep 4 13:11:56.510: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Sep 4 13:12:10.656: INFO: Waiting for deployment "webserver-deployment" to complete Sep 4 13:12:10.661: INFO: Updating deployment "webserver-deployment" with a non-existent image Sep 4 13:12:10.667: INFO: Updating deployment webserver-deployment Sep 4 13:12:10.667: INFO: Waiting for observed generation 2 Sep 4 13:12:14.191: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Sep 4 13:12:18.693: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Sep 4 13:12:19.783: INFO: Waiting for the first rollout's replicaset of deployment 
"webserver-deployment" to have desired number of replicas Sep 4 13:12:21.325: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Sep 4 13:12:21.325: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Sep 4 13:12:22.139: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Sep 4 13:12:22.619: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Sep 4 13:12:22.619: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Sep 4 13:12:22.626: INFO: Updating deployment webserver-deployment Sep 4 13:12:22.626: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Sep 4 13:12:22.801: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Sep 4 13:12:22.899: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 4 13:12:23.152: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-4878 /apis/apps/v1/namespaces/deployment-4878/deployments/webserver-deployment 2b6f33a0-cf90-412f-84b3-450d9926f173 6800075 3 2020-09-04 13:11:53 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-09-04 13:12:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-04 13:12:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0032134f8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2020-09-04 13:12:15 +0000 UTC,LastTransitionTime:2020-09-04 13:11:53 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-09-04 13:12:22 +0000 UTC,LastTransitionTime:2020-09-04 13:12:22 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Sep 4 13:12:23.217: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-4878 /apis/apps/v1/namespaces/deployment-4878/replicasets/webserver-deployment-795d758f88 2f26c3d2-51b5-4950-b3a9-16b623bcb3ca 6800108 3 2020-09-04 13:12:10 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 2b6f33a0-cf90-412f-84b3-450d9926f173 0xc003213bb7 0xc003213bb8}] [] [{kube-controller-manager Update apps/v1 2020-09-04 13:12:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2b6f33a0-cf90-412f-84b3-450d9926f173\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003213dc8 ClusterFirst map[] 
false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 4 13:12:23.217: INFO: All old ReplicaSets of Deployment "webserver-deployment": Sep 4 13:12:23.217: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-4878 /apis/apps/v1/namespaces/deployment-4878/replicasets/webserver-deployment-dd94f59b7 01550ba6-60c6-4401-b994-4d226f274732 6800093 3 2020-09-04 13:11:53 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 2b6f33a0-cf90-412f-84b3-450d9926f173 0xc003213ec7 0xc003213ec8}] [] [{kube-controller-manager Update apps/v1 2020-09-04 13:12:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2b6f33a0-cf90-412f-84b3-450d9926f173\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003213f48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Sep 4 13:12:23.271: INFO: Pod "webserver-deployment-795d758f88-2zpps" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-2zpps 
webserver-deployment-795d758f88- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-795d758f88-2zpps 2e0b29f1-af5f-4bfd-960e-9c7ae8dfaff9 6800060 0 2020-09-04 13:12:22 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 2f26c3d2-51b5-4950-b3a9-16b623bcb3ca 0xc0014805b7 0xc0014805b8}] [] [{kube-controller-manager Update v1 2020-09-04 13:12:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2f26c3d2-51b5-4950-b3a9-16b623bcb3ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Prior
ity:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.271: INFO: Pod "webserver-deployment-795d758f88-4grmv" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-4grmv webserver-deployment-795d758f88- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-795d758f88-4grmv f662b5e3-c544-40d5-a44b-fce969327011 6800087 0 2020-09-04 13:12:22 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 2f26c3d2-51b5-4950-b3a9-16b623bcb3ca 0xc0014806f7 0xc0014806f8}] [] [{kube-controller-manager Update v1 2020-09-04 13:12:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2f26c3d2-51b5-4950-b3a9-16b623bcb3ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds
:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.272: INFO: Pod "webserver-deployment-795d758f88-4hfpl" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-4hfpl webserver-deployment-795d758f88- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-795d758f88-4hfpl 4dff09ff-94d5-4434-a880-a391dd4d1d6b 6799975 0 2020-09-04 13:12:10 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 2f26c3d2-51b5-4950-b3a9-16b623bcb3ca 0xc001480c97 0xc001480c98}] [] [{kube-controller-manager Update v1 2020-09-04 13:12:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2f26c3d2-51b5-4950-b3a9-16b623bcb3ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-04 13:12:11 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-04 13:12:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-09-04 13:12:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.272: INFO: Pod "webserver-deployment-795d758f88-9rn44" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-9rn44 webserver-deployment-795d758f88- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-795d758f88-9rn44 45b33126-967a-4ef8-823d-a18320465234 6800025 0 2020-09-04 13:12:10 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 2f26c3d2-51b5-4950-b3a9-16b623bcb3ca 0xc001480f37 0xc001480f38}] [] [{kube-controller-manager Update v1 2020-09-04 13:12:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2f26c3d2-51b5-4950-b3a9-16b623bcb3ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-04 13:12:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.145\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:10 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.145,StartTime:2020-09-04 13:12:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.145,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.272: INFO: Pod "webserver-deployment-795d758f88-c8h6q" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-c8h6q webserver-deployment-795d758f88- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-795d758f88-c8h6q eab39c87-4e22-48c5-a1d9-5209e0e42fb9 6800098 0 2020-09-04 13:12:22 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 2f26c3d2-51b5-4950-b3a9-16b623bcb3ca 0xc001481187 0xc001481188}] [] [{kube-controller-manager Update v1 2020-09-04 13:12:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2f26c3d2-51b5-4950-b3a9-16b623bcb3ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-04 13:12:23 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-04 13:12:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-09-04 13:12:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.272: INFO: Pod "webserver-deployment-795d758f88-hdqlq" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-hdqlq webserver-deployment-795d758f88- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-795d758f88-hdqlq 0741efd0-5fa1-4722-ad94-30c05c54a6d8 6800086 0 2020-09-04 13:12:22 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 2f26c3d2-51b5-4950-b3a9-16b623bcb3ca 0xc002ffc4c7 0xc002ffc4c8}] [] [{kube-controller-manager Update v1 2020-09-04 13:12:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2f26c3d2-51b5-4950-b3a9-16b623bcb3ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,
ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.273: INFO: Pod "webserver-deployment-795d758f88-jhjnz" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-jhjnz webserver-deployment-795d758f88- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-795d758f88-jhjnz 26d68e77-e388-48d0-a83b-df7759388abf 6800106 0 2020-09-04 13:12:23 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 2f26c3d2-51b5-4950-b3a9-16b623bcb3ca 0xc002ffc6b7 0xc002ffc6b8}] [] [{kube-controller-manager Update v1 2020-09-04 13:12:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2f26c3d2-51b5-4950-b3a9-16b623bcb3ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-09-04 13:12:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.273: INFO: Pod "webserver-deployment-795d758f88-lrqrv" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-lrqrv webserver-deployment-795d758f88- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-795d758f88-lrqrv 4ae81d59-07f2-4b28-bd37-d00bebecf944 6800091 0 2020-09-04 13:12:22 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 2f26c3d2-51b5-4950-b3a9-16b623bcb3ca 0xc002ffca47 0xc002ffca48}] [] [{kube-controller-manager Update v1 2020-09-04 13:12:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2f26c3d2-51b5-4950-b3a9-16b623bcb3ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecre
ts:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.273: INFO: Pod "webserver-deployment-795d758f88-p27kx" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-p27kx webserver-deployment-795d758f88- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-795d758f88-p27kx 7abea7d3-ecb3-4261-bfd4-33757ea5d0d1 6800006 0 2020-09-04 13:12:10 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 2f26c3d2-51b5-4950-b3a9-16b623bcb3ca 0xc002ffcc77 0xc002ffcc78}] [] [{kube-controller-manager Update v1 2020-09-04 13:12:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2f26c3d2-51b5-4950-b3a9-16b623bcb3ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-04 13:12:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.146\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:10 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.146,StartTime:2020-09-04 13:12:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.146,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.273: INFO: Pod "webserver-deployment-795d758f88-rdjjf" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-rdjjf webserver-deployment-795d758f88- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-795d758f88-rdjjf 362cd6d7-4b9c-4378-8aa4-397a876d98f9 6800007 0 2020-09-04 13:12:10 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 2f26c3d2-51b5-4950-b3a9-16b623bcb3ca 0xc002ffce67 0xc002ffce68}] [] [{kube-controller-manager Update v1 2020-09-04 13:12:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2f26c3d2-51b5-4950-b3a9-16b623bcb3ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-04 13:12:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.77\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:10 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.2.77,StartTime:2020-09-04 13:12:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.77,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.273: INFO: Pod "webserver-deployment-795d758f88-sd5vc" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-sd5vc webserver-deployment-795d758f88- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-795d758f88-sd5vc 7fcc863f-2860-4a48-b80e-b58563dd5670 6800026 0 2020-09-04 13:12:10 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 2f26c3d2-51b5-4950-b3a9-16b623bcb3ca 0xc002ffd457 0xc002ffd458}] [] [{kube-controller-manager Update v1 2020-09-04 13:12:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2f26c3d2-51b5-4950-b3a9-16b623bcb3ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-04 13:12:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.79\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:11 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.2.79,StartTime:2020-09-04 13:12:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.79,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.274: INFO: Pod "webserver-deployment-795d758f88-w254k" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-w254k webserver-deployment-795d758f88- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-795d758f88-w254k 6a31e012-c7c0-44a1-9c7f-736209349993 6800059 0 2020-09-04 13:12:22 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 2f26c3d2-51b5-4950-b3a9-16b623bcb3ca 0xc002ffd8c7 0xc002ffd8c8}] [] [{kube-controller-manager Update v1 2020-09-04 13:12:22 +0000 UTC FieldsV1 
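------------------------------
The 795d758f88 pods dumped above all show the same waiting state: Reason ErrImagePull with the registry's "repository does not exist" message, RestartCount 0, and Started *false, which is why the framework reports each as "is not available". A minimal client-go sketch that surfaces these waiting reasons for this run's pods, using the namespace (deployment-4878), label (name=httpd), and kubeconfig path that appear in the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as logged at the start of this run.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Namespace and label selector match this test's deployment pods.
	pods, err := clientset.CoreV1().Pods("deployment-4878").List(
		context.TODO(), metav1.ListOptions{LabelSelector: "name=httpd"})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			// A non-nil Waiting state carries the pull failure seen above.
			if w := cs.State.Waiting; w != nil {
				fmt.Printf("%s/%s: %s: %s\n", pod.Name, cs.Name, w.Reason, w.Message)
			}
		}
	}
}
------------------------------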
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2f26c3d2-51b5-4950-b3a9-16b623bcb3ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-09-04 13:12:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.274: INFO: Pod "webserver-deployment-795d758f88-xnmhk" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-xnmhk webserver-deployment-795d758f88- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-795d758f88-xnmhk 727a75f8-1364-40f1-a0b8-a9163af851d6 6800082 0 2020-09-04 13:12:22 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 2f26c3d2-51b5-4950-b3a9-16b623bcb3ca 0xc002ffdb37 0xc002ffdb38}] [] [{kube-controller-manager Update v1 2020-09-04 13:12:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2f26c3d2-51b5-4950-b3a9-16b623bcb3ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecret
s:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.274: INFO: Pod "webserver-deployment-dd94f59b7-2vfxq" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-2vfxq webserver-deployment-dd94f59b7- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-dd94f59b7-2vfxq 2528390a-0118-42b7-b789-ae77b6414897 6800044 0 2020-09-04 13:12:22 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 01550ba6-60c6-4401-b994-4d226f274732 0xc002ffdc77 0xc002ffdc78}] [] [{kube-controller-manager Update v1 2020-09-04 13:12:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"01550ba6-60c6-4401-b994-4d226f274732\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:
0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.274: INFO: Pod "webserver-deployment-dd94f59b7-7kctr" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-7kctr webserver-deployment-dd94f59b7- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-dd94f59b7-7kctr d7198ac3-6cf3-4079-b35c-79b76bb69ca0 6800063 0 2020-09-04 13:12:22 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 01550ba6-60c6-4401-b994-4d226f274732 0xc002ffde97 0xc002ffde98}] [] [{kube-controller-manager Update v1 2020-09-04 13:12:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"01550ba6-60c6-4401-b994-4d226f274732\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:n
il,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.274: INFO: Pod "webserver-deployment-dd94f59b7-84m84" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-84m84 webserver-deployment-dd94f59b7- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-dd94f59b7-84m84 7f6574e2-a2d5-4ba6-89c3-738fce537975 6799888 0 2020-09-04 13:11:53 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 01550ba6-60c6-4401-b994-4d226f274732 0xc002ffdfd7 0xc002ffdfd8}] [] [{kube-controller-manager Update v1 2020-09-04 13:11:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"01550ba6-60c6-4401-b994-4d226f274732\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-04 13:12:07 +0000 UTC FieldsV1 
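------------------------------
From this point the log is dumping pods of the dd94f59b7 ReplicaSet, which use the pullable image docker.io/library/httpd:2.4.38-alpine; those that have reached Running with Ready=True are reported "is available", while the freshly scheduled ones are still "is not available". A sketch of the readiness check behind those verdicts (availability additionally requires the deployment's minReadySeconds, zero here, to have elapsed); the package and helper names are illustrative:

package podutil

import corev1 "k8s.io/api/core/v1"

// isPodReady reports whether the pod's Ready condition is True, which is
// what separates the "is available" pods in these dumps from the
// "is not available" ones still pending.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}
------------------------------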
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.142\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:11:53 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:11:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.142,StartTime:2020-09-04 13:11:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-04 13:12:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://caf4622465b7d977e9cf66a5da6dca76efdccff437b42818359d2d7e88543b71,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.142,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.274: INFO: Pod "webserver-deployment-dd94f59b7-8rfhw" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-8rfhw webserver-deployment-dd94f59b7- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-dd94f59b7-8rfhw 8e56c1ae-be8e-4862-a9e0-cf0516786338 6799911 0 2020-09-04 13:11:53 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 01550ba6-60c6-4401-b994-4d226f274732 0xc002eac187 0xc002eac188}] [] [{kube-controller-manager Update v1 2020-09-04 13:11:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"01550ba6-60c6-4401-b994-4d226f274732\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-04 13:12:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.76\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:11:53 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:11:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.2.76,StartTime:2020-09-04 13:11:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-04 13:12:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0019ce19ef990073954e9ec2d75f95a61a78c68b7758b4493dedfbc2b331a093,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.76,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.275: INFO: Pod "webserver-deployment-dd94f59b7-bq7rl" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-bq7rl webserver-deployment-dd94f59b7- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-dd94f59b7-bq7rl 1a4ac667-7ccf-482f-8fbb-21aa92152491 6800088 0 2020-09-04 13:12:22 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 01550ba6-60c6-4401-b994-4d226f274732 0xc002eac337 0xc002eac338}] [] [{kube-controller-manager Update v1 2020-09-04 13:12:22 +0000 UTC FieldsV1 
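------------------------------
Each dump carries up to two managedFields entries, visible as the f:/k: JSON fragments interleaved above: kube-controller-manager owns the spec and metadata it applied, and the kubelet owns status once it starts reporting. A sketch for printing that ownership from a pod object already fetched with client-go; the package and function names are illustrative:

package podutil

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// printFieldOwnership lists which manager owns which fields of a pod:
// in these dumps kube-controller-manager applied the spec and the
// kubelet applied status.
func printFieldOwnership(pod *corev1.Pod) {
	for _, mf := range pod.ManagedFields {
		fmt.Printf("manager=%s operation=%s apiVersion=%s\n",
			mf.Manager, mf.Operation, mf.APIVersion)
		if mf.FieldsV1 != nil {
			fmt.Println(string(mf.FieldsV1.Raw)) // the f:/k: JSON seen above
		}
	}
}
------------------------------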
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"01550ba6-60c6-4401-b994-4d226f274732\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime
:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.275: INFO: Pod "webserver-deployment-dd94f59b7-d8mvw" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-d8mvw webserver-deployment-dd94f59b7- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-dd94f59b7-d8mvw 43d2c9c7-e0f9-40fb-808c-7074f81e656e 6799921 0 2020-09-04 13:11:53 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 01550ba6-60c6-4401-b994-4d226f274732 0xc002eac467 0xc002eac468}] [] [{kube-controller-manager Update v1 2020-09-04 13:11:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"01550ba6-60c6-4401-b994-4d226f274732\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-04 13:12:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.144\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TT
Y:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:11:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:11:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.144,StartTime:2020-09-04 13:11:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-04 13:12:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://293fb2ba03b6e0e204a7633335cbd0cfb33d3bd9101233d9953465257d11fdd1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.144,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.275: INFO: Pod "webserver-deployment-dd94f59b7-h4r7g" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-h4r7g webserver-deployment-dd94f59b7- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-dd94f59b7-h4r7g dc4ef2d2-d8a4-4b52-abbb-06d89934a5e2 6800089 0 2020-09-04 13:12:22 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 01550ba6-60c6-4401-b994-4d226f274732 0xc002eac627 0xc002eac628}] [] [{kube-controller-manager Update v1 2020-09-04 13:12:22 +0000 UTC FieldsV1 
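------------------------------
Every pod in these dumps also carries the same pair of NoExecute tolerations with tolerationSeconds 300 for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable; these are injected by the DefaultTolerationSeconds admission plugin, not by the test itself. A sketch that checks for them (package and helper names illustrative):

package podutil

import corev1 "k8s.io/api/core/v1"

// hasDefaultTolerations reports whether the two NoExecute tolerations that
// appear on every pod in these dumps are present; they are added by the
// DefaultTolerationSeconds admission plugin when a pod sets none itself.
func hasDefaultTolerations(pod *corev1.Pod) bool {
	found := map[string]bool{}
	for _, t := range pod.Spec.Tolerations {
		if t.Operator == corev1.TolerationOpExists && t.Effect == corev1.TaintEffectNoExecute {
			found[t.Key] = true
		}
	}
	return found["node.kubernetes.io/not-ready"] && found["node.kubernetes.io/unreachable"]
}
------------------------------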
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"01550ba6-60c6-4401-b994-4d226f274732\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:
0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.275: INFO: Pod "webserver-deployment-dd94f59b7-hkq4f" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-hkq4f webserver-deployment-dd94f59b7- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-dd94f59b7-hkq4f 871933bc-dbfb-40bf-ac64-11d13645e94d 6800085 0 2020-09-04 13:12:22 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 01550ba6-60c6-4401-b994-4d226f274732 0xc002eac767 0xc002eac768}] [] [{kube-controller-manager Update v1 2020-09-04 13:12:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"01550ba6-60c6-4401-b994-4d226f274732\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:ni
l,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.275: INFO: Pod "webserver-deployment-dd94f59b7-j4jgn" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-j4jgn webserver-deployment-dd94f59b7- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-dd94f59b7-j4jgn a8387cc9-1c56-4502-9eea-3cd2fc3c9de7 6800071 0 2020-09-04 13:12:22 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 01550ba6-60c6-4401-b994-4d226f274732 0xc002eac897 0xc002eac898}] [] [{kube-controller-manager Update v1 2020-09-04 13:12:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"01550ba6-60c6-4401-b994-4d226f274732\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:
0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.275: INFO: Pod "webserver-deployment-dd94f59b7-j9b95" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-j9b95 webserver-deployment-dd94f59b7- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-dd94f59b7-j9b95 1590b871-17d5-45ec-89de-a6be20782a27 6800062 0 2020-09-04 13:12:22 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 01550ba6-60c6-4401-b994-4d226f274732 0xc002eac9e7 0xc002eac9e8}] [] [{kube-controller-manager Update v1 2020-09-04 13:12:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"01550ba6-60c6-4401-b994-4d226f274732\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:ni
l,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.276: INFO: Pod "webserver-deployment-dd94f59b7-jkk2j" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-jkk2j webserver-deployment-dd94f59b7- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-dd94f59b7-jkk2j cf9cf669-b486-4c03-83c7-f493137cb071 6799871 0 2020-09-04 13:11:53 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 01550ba6-60c6-4401-b994-4d226f274732 0xc002eacd07 0xc002eacd08}] [] [{kube-controller-manager Update v1 2020-09-04 13:11:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"01550ba6-60c6-4401-b994-4d226f274732\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-04 13:12:04 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.140\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:11:53 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:11:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.140,StartTime:2020-09-04 13:11:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-04 13:12:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://956dd2cb0107fcd9be90d4f5dbe51324539276447a7a3038d2dd3b075c9a4004,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.140,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.276: INFO: Pod "webserver-deployment-dd94f59b7-l7d99" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-l7d99 webserver-deployment-dd94f59b7- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-dd94f59b7-l7d99 889c1930-ccab-4fe6-ba95-10cf4c3fa980 6799905 0 2020-09-04 13:11:53 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 01550ba6-60c6-4401-b994-4d226f274732 0xc002ead067 0xc002ead068}] [] [{kube-controller-manager Update v1 2020-09-04 13:11:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"01550ba6-60c6-4401-b994-4d226f274732\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-04 13:12:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.73\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:11:53 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:11:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.2.73,StartTime:2020-09-04 13:11:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-04 13:12:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ed0f6459ff18a6e59413cf51cef3855f54583f36afaf758990b3ac670cb66124,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.73,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.276: INFO: Pod "webserver-deployment-dd94f59b7-l8r4g" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-l8r4g webserver-deployment-dd94f59b7- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-dd94f59b7-l8r4g 8f61af9b-678b-4c9c-89e1-03ef0bbc22f8 6800058 0 2020-09-04 13:12:22 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 01550ba6-60c6-4401-b994-4d226f274732 0xc002ead487 0xc002ead488}] [] [{kube-controller-manager Update v1 2020-09-04 13:12:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"01550ba6-60c6-4401-b994-4d226f274732\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime
:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.276: INFO: Pod "webserver-deployment-dd94f59b7-lvx6t" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-lvx6t webserver-deployment-dd94f59b7- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-dd94f59b7-lvx6t 67bf07b0-5a1e-440b-929d-9f36614bfdfd 6800090 0 2020-09-04 13:12:22 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 01550ba6-60c6-4401-b994-4d226f274732 0xc002ead777 0xc002ead778}] [] [{kube-controller-manager Update v1 2020-09-04 13:12:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"01550ba6-60c6-4401-b994-4d226f274732\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:
nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.276: INFO: Pod "webserver-deployment-dd94f59b7-q8t7d" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-q8t7d webserver-deployment-dd94f59b7- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-dd94f59b7-q8t7d f0006c2d-62f4-4833-bbd6-a15960da9bb8 6800112 0 2020-09-04 13:12:22 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 01550ba6-60c6-4401-b994-4d226f274732 0xc002eada57 0xc002eada58}] [] [{kube-controller-manager Update v1 2020-09-04 13:12:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"01550ba6-60c6-4401-b994-4d226f274732\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-04 13:12:23 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:22 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-09-04 13:12:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.276: INFO: Pod "webserver-deployment-dd94f59b7-qdf4m" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-qdf4m webserver-deployment-dd94f59b7- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-dd94f59b7-qdf4m 13a9f092-37ca-4b4c-8b9c-e0cf78451a07 6800077 0 2020-09-04 13:12:22 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 01550ba6-60c6-4401-b994-4d226f274732 0xc002eadda7 0xc002eadda8}] [] [{kube-controller-manager Update v1 2020-09-04 13:12:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"01550ba6-60c6-4401-b994-4d226f274732\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime
:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.277: INFO: Pod "webserver-deployment-dd94f59b7-sfl5w" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-sfl5w webserver-deployment-dd94f59b7- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-dd94f59b7-sfl5w 4af947d8-b82a-47a8-baaf-c681f785c0b3 6800080 0 2020-09-04 13:12:22 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 01550ba6-60c6-4401-b994-4d226f274732 0xc003076017 0xc003076018}] [] [{kube-controller-manager Update v1 2020-09-04 13:12:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"01550ba6-60c6-4401-b994-4d226f274732\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-04 13:12:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{}
,TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-09-04 13:12:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.277: INFO: Pod "webserver-deployment-dd94f59b7-tb7fp" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-tb7fp webserver-deployment-dd94f59b7- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-dd94f59b7-tb7fp f7bdaf66-720d-4067-a612-f2cbdb606174 6799914 0 2020-09-04 13:11:53 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 01550ba6-60c6-4401-b994-4d226f274732 0xc0030763b7 0xc0030763b8}] [] [{kube-controller-manager Update v1 2020-09-04 13:11:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"01550ba6-60c6-4401-b994-4d226f274732\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-04 13:12:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.143\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{
Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:11:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:11:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.143,StartTime:2020-09-04 13:11:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-04 13:12:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8ec59bd5703818910bef12646ade666407854f066d9aa9edd650ccf5a1ab1840,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.143,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.277: INFO: Pod "webserver-deployment-dd94f59b7-xv4qp" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-xv4qp webserver-deployment-dd94f59b7- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-dd94f59b7-xv4qp 47ddc944-0601-4b60-a2f2-220a5606cab0 6799893 0 2020-09-04 13:11:53 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 01550ba6-60c6-4401-b994-4d226f274732 0xc003076677 0xc003076678}] [] [{kube-controller-manager Update v1 2020-09-04 13:11:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"01550ba6-60c6-4401-b994-4d226f274732\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-04 13:12:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.141\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:11:53 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:11:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.141,StartTime:2020-09-04 13:11:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-04 13:12:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://bdb741f01107d4b799813fe78c4e98c82d8c092b08db8e923fa6b9ec669449be,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.141,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 13:12:23.277: INFO: Pod "webserver-deployment-dd94f59b7-z69pw" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-z69pw webserver-deployment-dd94f59b7- deployment-4878 /api/v1/namespaces/deployment-4878/pods/webserver-deployment-dd94f59b7-z69pw 9589b833-f6a8-43d3-804d-0ce5720333c3 6799857 0 2020-09-04 13:11:53 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 01550ba6-60c6-4401-b994-4d226f274732 0xc003076a47 0xc003076a48}] [] [{kube-controller-manager Update v1 2020-09-04 13:11:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"01550ba6-60c6-4401-b994-4d226f274732\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-04 13:12:00 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.72\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n7jm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n7jm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n7jm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:11:53 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:12:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:11:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.2.72,StartTime:2020-09-04 13:11:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-04 13:11:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://dc5b9a1141ff1a894c8dc9edb85039b9ed4d5a4d070b171f71492c37de43e125,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.72,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:12:23.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4878" for this suite. • [SLOW TEST:30.042 seconds] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":303,"completed":31,"skipped":441,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:12:23.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should scale a replication controller [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Sep 4 13:12:23.672: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3838' Sep 4 13:12:24.081: INFO: stderr: "" Sep 4 13:12:24.081: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Sep 4 13:12:24.082: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3838' Sep 4 13:12:24.366: INFO: stderr: "" Sep 4 13:12:24.366: INFO: stdout: "update-demo-nautilus-n9s5v update-demo-nautilus-nqnz8 " Sep 4 13:12:24.366: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n9s5v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3838' Sep 4 13:12:24.468: INFO: stderr: "" Sep 4 13:12:24.468: INFO: stdout: "" Sep 4 13:12:24.468: INFO: update-demo-nautilus-n9s5v is created but not running Sep 4 13:12:29.469: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3838' Sep 4 13:12:29.583: INFO: stderr: "" Sep 4 13:12:29.583: INFO: stdout: "update-demo-nautilus-n9s5v update-demo-nautilus-nqnz8 " Sep 4 13:12:29.583: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n9s5v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3838' Sep 4 13:12:29.799: INFO: stderr: "" Sep 4 13:12:29.799: INFO: stdout: "" Sep 4 13:12:29.799: INFO: update-demo-nautilus-n9s5v is created but not running Sep 4 13:12:34.799: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3838' Sep 4 13:12:35.569: INFO: stderr: "" Sep 4 13:12:35.569: INFO: stdout: "update-demo-nautilus-n9s5v update-demo-nautilus-nqnz8 " Sep 4 13:12:35.569: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n9s5v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3838' Sep 4 13:12:39.748: INFO: stderr: "" Sep 4 13:12:39.748: INFO: stdout: "" Sep 4 13:12:39.748: INFO: update-demo-nautilus-n9s5v is created but not running Sep 4 13:12:44.749: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3838' Sep 4 13:12:46.268: INFO: stderr: "" Sep 4 13:12:46.268: INFO: stdout: "update-demo-nautilus-n9s5v update-demo-nautilus-nqnz8 " Sep 4 13:12:46.268: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n9s5v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3838' Sep 4 13:12:47.189: INFO: stderr: "" Sep 4 13:12:47.189: INFO: stdout: "" Sep 4 13:12:47.189: INFO: update-demo-nautilus-n9s5v is created but not running Sep 4 13:12:52.189: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3838' Sep 4 13:12:52.778: INFO: stderr: "" Sep 4 13:12:52.778: INFO: stdout: "update-demo-nautilus-n9s5v update-demo-nautilus-nqnz8 " Sep 4 13:12:52.778: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n9s5v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3838' Sep 4 13:12:53.128: INFO: stderr: "" Sep 4 13:12:53.128: INFO: stdout: "" Sep 4 13:12:53.128: INFO: update-demo-nautilus-n9s5v is created but not running Sep 4 13:12:58.128: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3838' Sep 4 13:12:58.370: INFO: stderr: "" Sep 4 13:12:58.370: INFO: stdout: "update-demo-nautilus-n9s5v update-demo-nautilus-nqnz8 " Sep 4 13:12:58.370: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n9s5v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3838' Sep 4 13:12:59.140: INFO: stderr: "" Sep 4 13:12:59.140: INFO: stdout: "" Sep 4 13:12:59.140: INFO: update-demo-nautilus-n9s5v is created but not running Sep 4 13:13:04.140: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3838' Sep 4 13:13:04.368: INFO: stderr: "" Sep 4 13:13:04.368: INFO: stdout: "update-demo-nautilus-n9s5v update-demo-nautilus-nqnz8 " Sep 4 13:13:04.368: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n9s5v -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3838' Sep 4 13:13:04.753: INFO: stderr: "" Sep 4 13:13:04.753: INFO: stdout: "true" Sep 4 13:13:04.753: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n9s5v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3838' Sep 4 13:13:04.933: INFO: stderr: "" Sep 4 13:13:04.933: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 4 13:13:04.933: INFO: validating pod update-demo-nautilus-n9s5v Sep 4 13:13:04.966: INFO: got data: { "image": "nautilus.jpg" } Sep 4 13:13:04.966: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 4 13:13:04.966: INFO: update-demo-nautilus-n9s5v is verified up and running Sep 4 13:13:04.966: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqnz8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3838' Sep 4 13:13:05.288: INFO: stderr: "" Sep 4 13:13:05.288: INFO: stdout: "true" Sep 4 13:13:05.288: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqnz8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3838' Sep 4 13:13:05.427: INFO: stderr: "" Sep 4 13:13:05.427: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 4 13:13:05.427: INFO: validating pod update-demo-nautilus-nqnz8 Sep 4 13:13:05.594: INFO: got data: { "image": "nautilus.jpg" } Sep 4 13:13:05.594: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 4 13:13:05.594: INFO: update-demo-nautilus-nqnz8 is verified up and running STEP: scaling down the replication controller Sep 4 13:13:05.601: INFO: scanned /root for discovery docs: Sep 4 13:13:05.601: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3838' Sep 4 13:13:07.217: INFO: stderr: "" Sep 4 13:13:07.217: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Sep 4 13:13:07.217: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3838' Sep 4 13:13:07.364: INFO: stderr: "" Sep 4 13:13:07.364: INFO: stdout: "update-demo-nautilus-n9s5v update-demo-nautilus-nqnz8 " STEP: Replicas for name=update-demo: expected=1 actual=2 Sep 4 13:13:12.364: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3838' Sep 4 13:13:12.858: INFO: stderr: "" Sep 4 13:13:12.858: INFO: stdout: "update-demo-nautilus-n9s5v update-demo-nautilus-nqnz8 " STEP: Replicas for name=update-demo: expected=1 actual=2 Sep 4 13:13:17.859: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3838' Sep 4 13:13:18.007: INFO: stderr: "" Sep 4 13:13:18.007: INFO: stdout: "update-demo-nautilus-n9s5v update-demo-nautilus-nqnz8 " STEP: Replicas for name=update-demo: expected=1 actual=2 Sep 4 13:13:23.008: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3838' Sep 4 13:13:23.121: INFO: stderr: "" Sep 4 13:13:23.122: INFO: stdout: "update-demo-nautilus-n9s5v " Sep 4 13:13:23.122: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n9s5v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3838' Sep 4 13:13:23.217: INFO: stderr: "" Sep 4 13:13:23.217: INFO: stdout: "true" Sep 4 13:13:23.217: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n9s5v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3838' Sep 4 13:13:23.314: INFO: stderr: "" Sep 4 13:13:23.314: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 4 13:13:23.314: INFO: validating pod update-demo-nautilus-n9s5v Sep 4 13:13:23.317: INFO: got data: { "image": "nautilus.jpg" } Sep 4 13:13:23.317: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 4 13:13:23.317: INFO: update-demo-nautilus-n9s5v is verified up and running STEP: scaling up the replication controller Sep 4 13:13:23.319: INFO: scanned /root for discovery docs: Sep 4 13:13:23.319: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3838' Sep 4 13:13:24.506: INFO: stderr: "" Sep 4 13:13:24.506: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
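(The go-template polled above is hard to read inline. Re-wrapped for the shell, it is the same check the harness runs, printing "true" only once the container named update-demo reports a running state; pod and namespace names are the ones from this run:

    kubectl get pods update-demo-nautilus-n9s5v --namespace=kubectl-3838 -o template \
      --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'

An empty stdout, as seen in the earlier polls, means the container has not reached the running state yet.)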
Sep 4 13:13:24.506: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3838' Sep 4 13:13:24.604: INFO: stderr: "" Sep 4 13:13:24.604: INFO: stdout: "update-demo-nautilus-6m7qn update-demo-nautilus-n9s5v " Sep 4 13:13:24.604: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6m7qn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3838' Sep 4 13:13:24.713: INFO: stderr: "" Sep 4 13:13:24.713: INFO: stdout: "" Sep 4 13:13:24.713: INFO: update-demo-nautilus-6m7qn is created but not running Sep 4 13:13:29.714: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3838' Sep 4 13:13:29.832: INFO: stderr: "" Sep 4 13:13:29.832: INFO: stdout: "update-demo-nautilus-6m7qn update-demo-nautilus-n9s5v " Sep 4 13:13:29.832: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6m7qn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3838' Sep 4 13:13:29.933: INFO: stderr: "" Sep 4 13:13:29.933: INFO: stdout: "" Sep 4 13:13:29.933: INFO: update-demo-nautilus-6m7qn is created but not running Sep 4 13:13:34.933: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3838' Sep 4 13:13:35.053: INFO: stderr: "" Sep 4 13:13:35.053: INFO: stdout: "update-demo-nautilus-6m7qn update-demo-nautilus-n9s5v " Sep 4 13:13:35.053: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6m7qn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3838' Sep 4 13:13:35.166: INFO: stderr: "" Sep 4 13:13:35.166: INFO: stdout: "true" Sep 4 13:13:35.166: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6m7qn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3838' Sep 4 13:13:35.272: INFO: stderr: "" Sep 4 13:13:35.272: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 4 13:13:35.272: INFO: validating pod update-demo-nautilus-6m7qn Sep 4 13:13:35.276: INFO: got data: { "image": "nautilus.jpg" } Sep 4 13:13:35.276: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
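(Rather than re-running get pods every five seconds as the harness does, an interactive session could watch the label selector directly; --watch is a standard kubectl flag, shown here with this run's namespace:

    kubectl get pods -l name=update-demo --namespace=kubectl-3838 --watch

)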
Sep 4 13:13:35.276: INFO: update-demo-nautilus-6m7qn is verified up and running Sep 4 13:13:35.276: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n9s5v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3838' Sep 4 13:13:35.375: INFO: stderr: "" Sep 4 13:13:35.375: INFO: stdout: "true" Sep 4 13:13:35.375: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n9s5v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3838' Sep 4 13:13:35.479: INFO: stderr: "" Sep 4 13:13:35.479: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 4 13:13:35.479: INFO: validating pod update-demo-nautilus-n9s5v Sep 4 13:13:35.482: INFO: got data: { "image": "nautilus.jpg" } Sep 4 13:13:35.482: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 4 13:13:35.482: INFO: update-demo-nautilus-n9s5v is verified up and running STEP: using delete to clean up resources Sep 4 13:13:35.482: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3838' Sep 4 13:13:35.592: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 4 13:13:35.592: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Sep 4 13:13:35.592: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3838' Sep 4 13:13:35.696: INFO: stderr: "No resources found in kubectl-3838 namespace.\n" Sep 4 13:13:35.696: INFO: stdout: "" Sep 4 13:13:35.696: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3838 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 4 13:13:35.800: INFO: stderr: "" Sep 4 13:13:35.800: INFO: stdout: "update-demo-nautilus-6m7qn\nupdate-demo-nautilus-n9s5v\n" Sep 4 13:13:36.300: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3838' Sep 4 13:13:36.399: INFO: stderr: "No resources found in kubectl-3838 namespace.\n" Sep 4 13:13:36.399: INFO: stdout: "" Sep 4 13:13:36.399: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3838 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 4 13:13:36.514: INFO: stderr: "" Sep 4 13:13:36.514: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:13:36.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
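(The stderr warning above is the expected consequence of --grace-period=0 --force: the API object is removed immediately, without waiting for the kubelet to confirm the containers have stopped. That is why the cleanup loop filters out pods that merely carry a deletionTimestamp, using the same go-template as the logged command. A sketch of the pattern; the logged delete read the controller definition from stdin via -f -, but deleting by name is equivalent for cleanup purposes:

    kubectl delete rc update-demo-nautilus --namespace=kubectl-3838 --grace-period=0 --force
    # list only pods not yet marked for deletion
    kubectl get pods -l name=update-demo --namespace=kubectl-3838 \
      -o go-template='{{range .items}}{{if not .metadata.deletionTimestamp}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}'

)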
STEP: Destroying namespace "kubectl-3838" for this suite. • [SLOW TEST:73.083 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should scale a replication controller [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":303,"completed":32,"skipped":467,"failed":0} SS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:13:36.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:13:41.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-3927" for this suite. 
• [SLOW TEST:5.453 seconds] [sig-node] PodTemplates /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/podtemplates.go:41 should run the lifecycle of PodTemplates [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":303,"completed":33,"skipped":469,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:13:41.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 13:13:43.109: INFO: Waiting up to 5m0s for pod "busybox-user-65534-f6c81790-54a0-48c6-bdcc-b829e7c207eb" in namespace "security-context-test-2003" to be "Succeeded or Failed" Sep 4 13:13:43.203: INFO: Pod "busybox-user-65534-f6c81790-54a0-48c6-bdcc-b829e7c207eb": Phase="Pending", Reason="", readiness=false. Elapsed: 93.983589ms Sep 4 13:13:45.207: INFO: Pod "busybox-user-65534-f6c81790-54a0-48c6-bdcc-b829e7c207eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097774674s Sep 4 13:13:47.570: INFO: Pod "busybox-user-65534-f6c81790-54a0-48c6-bdcc-b829e7c207eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.461470049s Sep 4 13:13:50.202: INFO: Pod "busybox-user-65534-f6c81790-54a0-48c6-bdcc-b829e7c207eb": Phase="Pending", Reason="", readiness=false. Elapsed: 7.093647704s Sep 4 13:13:52.518: INFO: Pod "busybox-user-65534-f6c81790-54a0-48c6-bdcc-b829e7c207eb": Phase="Running", Reason="", readiness=true. Elapsed: 9.409587378s Sep 4 13:13:54.522: INFO: Pod "busybox-user-65534-f6c81790-54a0-48c6-bdcc-b829e7c207eb": Phase="Running", Reason="", readiness=true. Elapsed: 11.413511039s Sep 4 13:13:57.946: INFO: Pod "busybox-user-65534-f6c81790-54a0-48c6-bdcc-b829e7c207eb": Phase="Running", Reason="", readiness=true. Elapsed: 14.837295693s Sep 4 13:13:59.950: INFO: Pod "busybox-user-65534-f6c81790-54a0-48c6-bdcc-b829e7c207eb": Phase="Running", Reason="", readiness=true. Elapsed: 16.840740186s Sep 4 13:14:02.803: INFO: Pod "busybox-user-65534-f6c81790-54a0-48c6-bdcc-b829e7c207eb": Phase="Running", Reason="", readiness=true. Elapsed: 19.69430856s Sep 4 13:14:04.806: INFO: Pod "busybox-user-65534-f6c81790-54a0-48c6-bdcc-b829e7c207eb": Phase="Running", Reason="", readiness=true. 
Elapsed: 21.697145717s Sep 4 13:14:07.430: INFO: Pod "busybox-user-65534-f6c81790-54a0-48c6-bdcc-b829e7c207eb": Phase="Running", Reason="", readiness=true. Elapsed: 24.321098614s Sep 4 13:14:09.433: INFO: Pod "busybox-user-65534-f6c81790-54a0-48c6-bdcc-b829e7c207eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.324474742s Sep 4 13:14:09.433: INFO: Pod "busybox-user-65534-f6c81790-54a0-48c6-bdcc-b829e7c207eb" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:14:09.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2003" for this suite. • [SLOW TEST:27.517 seconds] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a container with runAsUser /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":34,"skipped":482,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:14:09.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Sep 4 13:14:11.751: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Sep 4 13:14:13.761: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734822051, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734822051, 
loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734822051, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734822051, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 4 13:14:16.800: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 13:14:16.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:14:17.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1112" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:8.598 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":303,"completed":35,"skipped":491,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:14:18.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 13:14:18.275: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:14:19.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3566" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":303,"completed":36,"skipped":500,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:14:19.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 13:14:19.535: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:14:25.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-811" for this suite. 
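(The listing test registers throwaway CustomResourceDefinitions via the apiextensions client and then lists them, so the log stays quiet. An equivalent minimal CRD created and listed by hand; the group, kind, and plural here are illustrative placeholders:

    kubectl apply -f - <<'EOF'
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: foos.example.com
    spec:
      group: example.com
      scope: Namespaced
      names:
        plural: foos
        singular: foo
        kind: Foo
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
    EOF
    kubectl get customresourcedefinitions

)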
• [SLOW TEST:6.586 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":303,"completed":37,"skipped":525,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:14:25.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:14:42.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2005" for this suite. • [SLOW TEST:16.328 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":303,"completed":38,"skipped":579,"failed":0} S ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:14:42.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-9135/configmap-test-b22dff42-16ba-404a-8a83-54a54f8132bf STEP: Creating a pod to test consume configMaps Sep 4 13:14:42.433: INFO: Waiting up to 5m0s for pod "pod-configmaps-d38f991f-89af-4301-a909-a4142720c113" in namespace "configmap-9135" to be "Succeeded or Failed" Sep 4 13:14:42.486: INFO: Pod "pod-configmaps-d38f991f-89af-4301-a909-a4142720c113": Phase="Pending", Reason="", readiness=false. Elapsed: 52.445818ms Sep 4 13:14:44.490: INFO: Pod "pod-configmaps-d38f991f-89af-4301-a909-a4142720c113": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0570042s Sep 4 13:14:46.495: INFO: Pod "pod-configmaps-d38f991f-89af-4301-a909-a4142720c113": Phase="Running", Reason="", readiness=true. Elapsed: 4.061462522s Sep 4 13:14:48.499: INFO: Pod "pod-configmaps-d38f991f-89af-4301-a909-a4142720c113": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.066053517s STEP: Saw pod success Sep 4 13:14:48.499: INFO: Pod "pod-configmaps-d38f991f-89af-4301-a909-a4142720c113" satisfied condition "Succeeded or Failed" Sep 4 13:14:48.502: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-d38f991f-89af-4301-a909-a4142720c113 container env-test: STEP: delete the pod Sep 4 13:14:48.550: INFO: Waiting for pod pod-configmaps-d38f991f-89af-4301-a909-a4142720c113 to disappear Sep 4 13:14:48.566: INFO: Pod pod-configmaps-d38f991f-89af-4301-a909-a4142720c113 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:14:48.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9135" for this suite. 
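(The ConfigMap test above creates a map, then a pod whose container imports one key as an environment variable and exits after printing its environment. A minimal manifest pair showing the same wiring; the map name, key, value, and image are illustrative, though the container name env-test matches the logged pod:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: configmap-test
    data:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmaps-example
    spec:
      restartPolicy: Never
      containers:
      - name: env-test
        image: busybox:1.29
        command: ["sh", "-c", "env"]
        env:
        - name: CONFIG_DATA_1
          valueFrom:
            configMapKeyRef:
              name: configmap-test
              key: data-1
    EOF

)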
• [SLOW TEST:6.307 seconds] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":303,"completed":39,"skipped":580,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:14:48.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-46df3d13-1264-4fa7-bf3f-7776d054408d STEP: Creating a pod to test consume configMaps Sep 4 13:14:48.701: INFO: Waiting up to 5m0s for pod "pod-configmaps-7a8f2974-2d18-42c5-a58f-d2d911166da7" in namespace "configmap-7693" to be "Succeeded or Failed" Sep 4 13:14:48.710: INFO: Pod "pod-configmaps-7a8f2974-2d18-42c5-a58f-d2d911166da7": Phase="Pending", Reason="", readiness=false. Elapsed: 9.140963ms Sep 4 13:14:50.727: INFO: Pod "pod-configmaps-7a8f2974-2d18-42c5-a58f-d2d911166da7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026490363s Sep 4 13:14:52.752: INFO: Pod "pod-configmaps-7a8f2974-2d18-42c5-a58f-d2d911166da7": Phase="Running", Reason="", readiness=true. Elapsed: 4.051664419s Sep 4 13:14:54.756: INFO: Pod "pod-configmaps-7a8f2974-2d18-42c5-a58f-d2d911166da7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.05564482s STEP: Saw pod success Sep 4 13:14:54.756: INFO: Pod "pod-configmaps-7a8f2974-2d18-42c5-a58f-d2d911166da7" satisfied condition "Succeeded or Failed" Sep 4 13:14:54.760: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-7a8f2974-2d18-42c5-a58f-d2d911166da7 container configmap-volume-test: STEP: delete the pod Sep 4 13:14:54.810: INFO: Waiting for pod pod-configmaps-7a8f2974-2d18-42c5-a58f-d2d911166da7 to disappear Sep 4 13:14:54.814: INFO: Pod pod-configmaps-7a8f2974-2d18-42c5-a58f-d2d911166da7 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:14:54.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7693" for this suite. 
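(The non-root variant below differs from the env-var test mainly in two places: the map is mounted as a volume, and a pod-level security context forces a non-root UID. A sketch of that shape, assuming a ConfigMap named configmap-test-volume with key data-1 already exists; the UID, names, and image are illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmaps-nonroot
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000
      containers:
      - name: configmap-volume-test
        image: busybox:1.29
        command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
        volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume
      volumes:
      - name: configmap-volume
        configMap:
          name: configmap-test-volume
    EOF

)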
• [SLOW TEST:6.250 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":40,"skipped":600,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:14:54.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check is all data is printed [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 13:14:54.860: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config version' Sep 4 13:14:54.995: INFO: stderr: "" Sep 4 13:14:54.995: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.1-rc.0\", GitCommit:\"945f4d7267dedfa22337d3705c510f0e3612ace6\", GitTreeState:\"clean\", BuildDate:\"2020-08-26T14:49:55Z\", GoVersion:\"go1.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-rc.1\", GitCommit:\"2cbdfecbbd57dbd4e9f42d73a75fbbc6d9eadfd3\", GitTreeState:\"clean\", BuildDate:\"2020-07-19T21:33:31Z\", GoVersion:\"go1.14.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:14:54.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5450" for this suite. 
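(The kubectl version test above asserts only that both the Client Version and Server Version stanzas appear in the output. For scripting, the same information is available in machine-readable form:

    kubectl version -o json

)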
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":303,"completed":41,"skipped":601,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:14:55.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-xhhd STEP: Creating a pod to test atomic-volume-subpath Sep 4 13:14:55.158: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-xhhd" in namespace "subpath-4320" to be "Succeeded or Failed" Sep 4 13:14:55.181: INFO: Pod "pod-subpath-test-configmap-xhhd": Phase="Pending", Reason="", readiness=false. Elapsed: 23.056235ms Sep 4 13:14:57.250: INFO: Pod "pod-subpath-test-configmap-xhhd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092215522s Sep 4 13:14:59.255: INFO: Pod "pod-subpath-test-configmap-xhhd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096449252s Sep 4 13:15:01.259: INFO: Pod "pod-subpath-test-configmap-xhhd": Phase="Running", Reason="", readiness=true. Elapsed: 6.100824299s Sep 4 13:15:03.264: INFO: Pod "pod-subpath-test-configmap-xhhd": Phase="Running", Reason="", readiness=true. Elapsed: 8.105566296s Sep 4 13:15:05.267: INFO: Pod "pod-subpath-test-configmap-xhhd": Phase="Running", Reason="", readiness=true. Elapsed: 10.109158472s Sep 4 13:15:07.272: INFO: Pod "pod-subpath-test-configmap-xhhd": Phase="Running", Reason="", readiness=true. Elapsed: 12.113714856s Sep 4 13:15:09.277: INFO: Pod "pod-subpath-test-configmap-xhhd": Phase="Running", Reason="", readiness=true. Elapsed: 14.118426272s Sep 4 13:15:11.281: INFO: Pod "pod-subpath-test-configmap-xhhd": Phase="Running", Reason="", readiness=true. Elapsed: 16.123245216s Sep 4 13:15:13.286: INFO: Pod "pod-subpath-test-configmap-xhhd": Phase="Running", Reason="", readiness=true. Elapsed: 18.127916264s Sep 4 13:15:15.291: INFO: Pod "pod-subpath-test-configmap-xhhd": Phase="Running", Reason="", readiness=true. Elapsed: 20.132949771s Sep 4 13:15:17.296: INFO: Pod "pod-subpath-test-configmap-xhhd": Phase="Running", Reason="", readiness=true. Elapsed: 22.137613076s Sep 4 13:15:19.320: INFO: Pod "pod-subpath-test-configmap-xhhd": Phase="Running", Reason="", readiness=true. Elapsed: 24.162200497s Sep 4 13:15:21.336: INFO: Pod "pod-subpath-test-configmap-xhhd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.177997531s STEP: Saw pod success Sep 4 13:15:21.336: INFO: Pod "pod-subpath-test-configmap-xhhd" satisfied condition "Succeeded or Failed" Sep 4 13:15:21.339: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-xhhd container test-container-subpath-configmap-xhhd: STEP: delete the pod Sep 4 13:15:21.386: INFO: Waiting for pod pod-subpath-test-configmap-xhhd to disappear Sep 4 13:15:21.415: INFO: Pod pod-subpath-test-configmap-xhhd no longer exists STEP: Deleting pod pod-subpath-test-configmap-xhhd Sep 4 13:15:21.415: INFO: Deleting pod "pod-subpath-test-configmap-xhhd" in namespace "subpath-4320" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:15:21.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4320" for this suite. • [SLOW TEST:26.422 seconds] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":303,"completed":42,"skipped":617,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:15:21.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 4 13:15:22.385: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 4 13:15:25.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734822122, loc:(*time.Location)(0x7702840)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734822122, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734822123, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734822122, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 4 13:15:27.266: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734822122, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734822122, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734822123, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734822122, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 4 13:15:30.500: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 13:15:30.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:15:31.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1677" for this suite. STEP: Destroying namespace "webhook-1677-markers" for this suite. 
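------------------------------
Note on the registration above: the log shows the webhook deployment and service coming up, but not the registration object itself. A sketch of the kind of ValidatingWebhookConfiguration involved follows; the API group, resource name, webhook path and port are illustrative assumptions, not the values the e2e framework generates (only the service name and namespace are taken from this run).

cat <<'EOF' | kubectl apply -f -
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-custom-resource-ops          # illustrative name
webhooks:
- name: deny-crd.example.com              # illustrative
  rules:
  - apiGroups: ["webhook.example.com"]    # illustrative CRD group
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE", "DELETE"]
    resources: ["testcrds"]               # illustrative plural resource
  clientConfig:
    service:
      namespace: webhook-1677             # from this run
      name: e2e-test-webhook              # from this run
      path: /custom-resource              # illustrative
      port: 443
    # caBundle: <base64 serving CA>       # required in practice; omitted here
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  failurePolicy: Fail
EOF

With such an object in place, CREATE/UPDATE/DELETE requests for the matched custom resource are sent to the webhook service, which can deny them: exactly the behaviour the steps above exercise.
------------------------------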
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.650 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":303,"completed":43,"skipped":627,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:15:32.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-9884 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 4 13:15:32.220: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Sep 4 13:15:32.702: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 4 13:15:34.792: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 4 13:15:36.714: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 4 13:15:38.775: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 4 13:15:40.706: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 4 13:15:42.707: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 4 13:15:44.707: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 4 13:15:46.706: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 4 13:15:48.705: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 4 13:15:50.712: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 4 13:15:52.707: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 4 13:15:54.707: INFO: The status of Pod netserver-0 is Running (Ready = true) Sep 4 13:15:54.714: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Sep 4 13:16:00.746: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 
http://10.244.2.96:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9884 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 4 13:16:00.746: INFO: >>> kubeConfig: /root/.kube/config I0904 13:16:00.778509 7 log.go:181] (0xc000143a20) (0xc0010f8a00) Create stream I0904 13:16:00.778550 7 log.go:181] (0xc000143a20) (0xc0010f8a00) Stream added, broadcasting: 1 I0904 13:16:00.781751 7 log.go:181] (0xc000143a20) Reply frame received for 1 I0904 13:16:00.781806 7 log.go:181] (0xc000143a20) (0xc0016932c0) Create stream I0904 13:16:00.781859 7 log.go:181] (0xc000143a20) (0xc0016932c0) Stream added, broadcasting: 3 I0904 13:16:00.783803 7 log.go:181] (0xc000143a20) Reply frame received for 3 I0904 13:16:00.783852 7 log.go:181] (0xc000143a20) (0xc000d0ee60) Create stream I0904 13:16:00.783868 7 log.go:181] (0xc000143a20) (0xc000d0ee60) Stream added, broadcasting: 5 I0904 13:16:00.784976 7 log.go:181] (0xc000143a20) Reply frame received for 5 I0904 13:16:00.939481 7 log.go:181] (0xc000143a20) Data frame received for 5 I0904 13:16:00.939515 7 log.go:181] (0xc000d0ee60) (5) Data frame handling I0904 13:16:00.939544 7 log.go:181] (0xc000143a20) Data frame received for 3 I0904 13:16:00.939577 7 log.go:181] (0xc0016932c0) (3) Data frame handling I0904 13:16:00.939616 7 log.go:181] (0xc0016932c0) (3) Data frame sent I0904 13:16:00.939637 7 log.go:181] (0xc000143a20) Data frame received for 3 I0904 13:16:00.939653 7 log.go:181] (0xc0016932c0) (3) Data frame handling I0904 13:16:00.941186 7 log.go:181] (0xc000143a20) Data frame received for 1 I0904 13:16:00.941213 7 log.go:181] (0xc0010f8a00) (1) Data frame handling I0904 13:16:00.941226 7 log.go:181] (0xc0010f8a00) (1) Data frame sent I0904 13:16:00.941247 7 log.go:181] (0xc000143a20) (0xc0010f8a00) Stream removed, broadcasting: 1 I0904 13:16:00.941290 7 log.go:181] (0xc000143a20) Go away received I0904 13:16:00.941611 7 log.go:181] (0xc000143a20) (0xc0010f8a00) Stream removed, broadcasting: 1 I0904 13:16:00.941627 7 log.go:181] (0xc000143a20) (0xc0016932c0) Stream removed, broadcasting: 3 I0904 13:16:00.941634 7 log.go:181] (0xc000143a20) (0xc000d0ee60) Stream removed, broadcasting: 5 Sep 4 13:16:00.941: INFO: Found all expected endpoints: [netserver-0] Sep 4 13:16:00.947: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.164:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9884 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 4 13:16:00.947: INFO: >>> kubeConfig: /root/.kube/config I0904 13:16:00.977018 7 log.go:181] (0xc0029aea50) (0xc001693ae0) Create stream I0904 13:16:00.977059 7 log.go:181] (0xc0029aea50) (0xc001693ae0) Stream added, broadcasting: 1 I0904 13:16:00.979826 7 log.go:181] (0xc0029aea50) Reply frame received for 1 I0904 13:16:00.979866 7 log.go:181] (0xc0029aea50) (0xc001693c20) Create stream I0904 13:16:00.979882 7 log.go:181] (0xc0029aea50) (0xc001693c20) Stream added, broadcasting: 3 I0904 13:16:00.980868 7 log.go:181] (0xc0029aea50) Reply frame received for 3 I0904 13:16:00.980911 7 log.go:181] (0xc0029aea50) (0xc000d0f040) Create stream I0904 13:16:00.980925 7 log.go:181] (0xc0029aea50) (0xc000d0f040) Stream added, broadcasting: 5 I0904 13:16:00.981940 7 log.go:181] (0xc0029aea50) Reply frame received for 5 I0904 13:16:01.064637 7 log.go:181] (0xc0029aea50) Data frame received for 3 I0904 13:16:01.064681 
7 log.go:181] (0xc001693c20) (3) Data frame handling I0904 13:16:01.064692 7 log.go:181] (0xc001693c20) (3) Data frame sent I0904 13:16:01.064699 7 log.go:181] (0xc0029aea50) Data frame received for 3 I0904 13:16:01.064820 7 log.go:181] (0xc001693c20) (3) Data frame handling I0904 13:16:01.064906 7 log.go:181] (0xc0029aea50) Data frame received for 5 I0904 13:16:01.064943 7 log.go:181] (0xc000d0f040) (5) Data frame handling I0904 13:16:01.066476 7 log.go:181] (0xc0029aea50) Data frame received for 1 I0904 13:16:01.066507 7 log.go:181] (0xc001693ae0) (1) Data frame handling I0904 13:16:01.066536 7 log.go:181] (0xc001693ae0) (1) Data frame sent I0904 13:16:01.066665 7 log.go:181] (0xc0029aea50) (0xc001693ae0) Stream removed, broadcasting: 1 I0904 13:16:01.066689 7 log.go:181] (0xc0029aea50) Go away received I0904 13:16:01.066803 7 log.go:181] (0xc0029aea50) (0xc001693ae0) Stream removed, broadcasting: 1 I0904 13:16:01.066838 7 log.go:181] (0xc0029aea50) (0xc001693c20) Stream removed, broadcasting: 3 I0904 13:16:01.066867 7 log.go:181] (0xc0029aea50) (0xc000d0f040) Stream removed, broadcasting: 5 Sep 4 13:16:01.066: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:16:01.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9884" for this suite. • [SLOW TEST:28.999 seconds] [sig-network] Networking /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":44,"skipped":676,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:16:01.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 4 13:16:01.137: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 4 13:16:01.145: INFO: Waiting for terminating namespaces to be deleted... 
Sep 4 13:16:01.148: INFO: Logging pods the apiserver thinks is on node latest-worker before test Sep 4 13:16:01.185: INFO: rally-c3cf80a8-zqemyawi-6fcd6c6446-gb8ff from c-rally-c3cf80a8-jodeb7v9 started at 2020-09-04 13:15:40 +0000 UTC (1 container statuses recorded) Sep 4 13:16:01.185: INFO: Container rally-c3cf80a8-zqemyawi ready: true, restart count 0 Sep 4 13:16:01.185: INFO: rally-c3cf80a8-zqemyawi-76fc6448f-bwrd6 from c-rally-c3cf80a8-jodeb7v9 started at 2020-09-04 13:15:45 +0000 UTC (1 container statuses recorded) Sep 4 13:16:01.185: INFO: Container rally-c3cf80a8-zqemyawi ready: true, restart count 0 Sep 4 13:16:01.185: INFO: daemon-set-64t9w from daemonsets-1323 started at 2020-08-21 01:17:50 +0000 UTC (1 container statuses recorded) Sep 4 13:16:01.185: INFO: Container app ready: true, restart count 0 Sep 4 13:16:01.185: INFO: daemon-set-ff4l6 from daemonsets-8598 started at 2020-08-26 01:17:55 +0000 UTC (1 container statuses recorded) Sep 4 13:16:01.185: INFO: Container app ready: true, restart count 0 Sep 4 13:16:01.185: INFO: live6 from default started at 2020-08-30 11:51:51 +0000 UTC (1 container statuses recorded) Sep 4 13:16:01.185: INFO: Container live6 ready: false, restart count 0 Sep 4 13:16:01.185: INFO: test-recreate-deployment-f79dd4667-n4rtn from deployment-6445 started at 2020-08-28 02:33:33 +0000 UTC (1 container statuses recorded) Sep 4 13:16:01.185: INFO: Container httpd ready: true, restart count 0 Sep 4 13:16:01.185: INFO: bono-7b5b98574f-j2wlq from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (2 container statuses recorded) Sep 4 13:16:01.185: INFO: Container bono ready: true, restart count 0 Sep 4 13:16:01.185: INFO: Container tailer ready: true, restart count 0 Sep 4 13:16:01.185: INFO: chronos-678bcff97d-665n9 from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (2 container statuses recorded) Sep 4 13:16:01.185: INFO: Container chronos ready: true, restart count 0 Sep 4 13:16:01.185: INFO: Container tailer ready: true, restart count 0 Sep 4 13:16:01.185: INFO: homer-6d85c54796-5grhn from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (1 container statuses recorded) Sep 4 13:16:01.185: INFO: Container homer ready: true, restart count 0 Sep 4 13:16:01.185: INFO: homestead-prov-54ddb995c5-phmgj from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (1 container statuses recorded) Sep 4 13:16:01.185: INFO: Container homestead-prov ready: true, restart count 0 Sep 4 13:16:01.185: INFO: live-test from ims-fqddr started at 2020-08-30 10:33:20 +0000 UTC (1 container statuses recorded) Sep 4 13:16:01.185: INFO: Container live-test ready: false, restart count 0 Sep 4 13:16:01.185: INFO: ralf-645db98795-l7gpf from ims-fqddr started at 2020-08-30 10:27:31 +0000 UTC (2 container statuses recorded) Sep 4 13:16:01.185: INFO: Container ralf ready: true, restart count 0 Sep 4 13:16:01.185: INFO: Container tailer ready: true, restart count 0 Sep 4 13:16:01.185: INFO: kindnet-gmpqb from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Sep 4 13:16:01.185: INFO: Container kindnet-cni ready: true, restart count 1 Sep 4 13:16:01.185: INFO: kube-proxy-82wrf from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Sep 4 13:16:01.185: INFO: Container kube-proxy ready: true, restart count 0 Sep 4 13:16:01.185: INFO: netserver-0 from pod-network-test-9884 started at 2020-09-04 13:15:32 +0000 UTC (1 container statuses recorded) Sep 4 13:16:01.185: INFO: Container webserver ready: true, restart count 0 
Sep 4 13:16:01.185: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Sep 4 13:16:01.193: INFO: daemon-set-jxhg7 from daemonsets-1323 started at 2020-08-21 01:17:50 +0000 UTC (1 container statuses recorded) Sep 4 13:16:01.193: INFO: Container app ready: true, restart count 0 Sep 4 13:16:01.193: INFO: daemon-set-6qbhl from daemonsets-8598 started at 2020-08-26 01:17:55 +0000 UTC (1 container statuses recorded) Sep 4 13:16:01.193: INFO: Container app ready: true, restart count 0 Sep 4 13:16:01.193: INFO: live3 from default started at 2020-08-30 11:14:22 +0000 UTC (1 container statuses recorded) Sep 4 13:16:01.193: INFO: Container live3 ready: false, restart count 0 Sep 4 13:16:01.193: INFO: live4 from default started at 2020-08-30 11:19:29 +0000 UTC (1 container statuses recorded) Sep 4 13:16:01.194: INFO: Container live4 ready: false, restart count 0 Sep 4 13:16:01.194: INFO: live5 from default started at 2020-08-30 11:22:52 +0000 UTC (1 container statuses recorded) Sep 4 13:16:01.194: INFO: Container live5 ready: false, restart count 0 Sep 4 13:16:01.194: INFO: astaire-66c5667484-7s6hd from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (2 container statuses recorded) Sep 4 13:16:01.194: INFO: Container astaire ready: true, restart count 0 Sep 4 13:16:01.194: INFO: Container tailer ready: true, restart count 0 Sep 4 13:16:01.194: INFO: cassandra-bf5b4886d-w9qkb from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (1 container statuses recorded) Sep 4 13:16:01.194: INFO: Container cassandra ready: true, restart count 0 Sep 4 13:16:01.194: INFO: ellis-668f49999b-84cll from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (1 container statuses recorded) Sep 4 13:16:01.194: INFO: Container ellis ready: true, restart count 0 Sep 4 13:16:01.194: INFO: etcd-744b4d9f98-5bm8d from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (1 container statuses recorded) Sep 4 13:16:01.194: INFO: Container etcd ready: true, restart count 0 Sep 4 13:16:01.194: INFO: homestead-59959889bd-dh787 from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (2 container statuses recorded) Sep 4 13:16:01.194: INFO: Container homestead ready: true, restart count 0 Sep 4 13:16:01.194: INFO: Container tailer ready: true, restart count 0 Sep 4 13:16:01.194: INFO: sprout-b4bbc5c49-m9nqx from ims-fqddr started at 2020-08-30 10:27:31 +0000 UTC (2 container statuses recorded) Sep 4 13:16:01.194: INFO: Container sprout ready: true, restart count 0 Sep 4 13:16:01.194: INFO: Container tailer ready: true, restart count 0 Sep 4 13:16:01.194: INFO: kindnet-grzzh from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Sep 4 13:16:01.194: INFO: Container kindnet-cni ready: true, restart count 1 Sep 4 13:16:01.194: INFO: kube-proxy-fjk8r from kube-system started at 2020-08-15 09:42:29 +0000 UTC (1 container statuses recorded) Sep 4 13:16:01.194: INFO: Container kube-proxy ready: true, restart count 0 Sep 4 13:16:01.194: INFO: host-test-container-pod from pod-network-test-9884 started at 2020-09-04 13:15:54 +0000 UTC (1 container statuses recorded) Sep 4 13:16:01.194: INFO: Container agnhost ready: true, restart count 0 Sep 4 13:16:01.194: INFO: netserver-1 from pod-network-test-9884 started at 2020-09-04 13:15:32 +0000 UTC (1 container statuses recorded) Sep 4 13:16:01.194: INFO: Container webserver ready: true, restart count 0 Sep 4 13:16:01.194: INFO: test-container-pod from pod-network-test-9884 started at 2020-09-04 13:15:54 +0000 UTC (1 container statuses 
recorded) Sep 4 13:16:01.194: INFO: Container webserver ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-cbe9e92f-89c9-48c1-936d-69e34d5dadc6 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-cbe9e92f-89c9-48c1-936d-69e34d5dadc6 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-cbe9e92f-89c9-48c1-936d-69e34d5dadc6 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:21:11.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8039" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:310.354 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":303,"completed":45,"skipped":682,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:21:11.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 4 13:21:11.596: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ae706035-e6c6-43b9-8182-e73f07cef9b7" in namespace "downward-api-1" to be "Succeeded or Failed" Sep 4 13:21:11.636: INFO: Pod "downwardapi-volume-ae706035-e6c6-43b9-8182-e73f07cef9b7": Phase="Pending", Reason="", readiness=false. Elapsed: 40.659708ms Sep 4 13:21:13.662: INFO: Pod "downwardapi-volume-ae706035-e6c6-43b9-8182-e73f07cef9b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066228859s Sep 4 13:21:15.670: INFO: Pod "downwardapi-volume-ae706035-e6c6-43b9-8182-e73f07cef9b7": Phase="Running", Reason="", readiness=true. Elapsed: 4.074117856s Sep 4 13:21:17.683: INFO: Pod "downwardapi-volume-ae706035-e6c6-43b9-8182-e73f07cef9b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.087341852s STEP: Saw pod success Sep 4 13:21:17.683: INFO: Pod "downwardapi-volume-ae706035-e6c6-43b9-8182-e73f07cef9b7" satisfied condition "Succeeded or Failed" Sep 4 13:21:17.686: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-ae706035-e6c6-43b9-8182-e73f07cef9b7 container client-container: STEP: delete the pod Sep 4 13:21:17.775: INFO: Waiting for pod downwardapi-volume-ae706035-e6c6-43b9-8182-e73f07cef9b7 to disappear Sep 4 13:21:17.814: INFO: Pod downwardapi-volume-ae706035-e6c6-43b9-8182-e73f07cef9b7 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:21:17.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1" for this suite. • [SLOW TEST:6.390 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":303,"completed":46,"skipped":687,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:21:17.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:21:24.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7421" for this suite. • [SLOW TEST:7.124 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":303,"completed":47,"skipped":700,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:21:24.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Sep 4 13:21:25.053: INFO: Waiting up to 5m0s for pod "pod-0d7517e0-b865-406b-aeb9-85612fbaa1b4" in namespace "emptydir-9966" to be "Succeeded or Failed" Sep 4 13:21:25.057: INFO: Pod "pod-0d7517e0-b865-406b-aeb9-85612fbaa1b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308614ms Sep 4 13:21:27.255: INFO: Pod "pod-0d7517e0-b865-406b-aeb9-85612fbaa1b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20204976s Sep 4 13:21:29.259: INFO: Pod "pod-0d7517e0-b865-406b-aeb9-85612fbaa1b4": Phase="Running", Reason="", readiness=true. Elapsed: 4.205862947s Sep 4 13:21:31.262: INFO: Pod "pod-0d7517e0-b865-406b-aeb9-85612fbaa1b4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.209679696s STEP: Saw pod success Sep 4 13:21:31.263: INFO: Pod "pod-0d7517e0-b865-406b-aeb9-85612fbaa1b4" satisfied condition "Succeeded or Failed" Sep 4 13:21:31.266: INFO: Trying to get logs from node latest-worker2 pod pod-0d7517e0-b865-406b-aeb9-85612fbaa1b4 container test-container: STEP: delete the pod Sep 4 13:21:31.311: INFO: Waiting for pod pod-0d7517e0-b865-406b-aeb9-85612fbaa1b4 to disappear Sep 4 13:21:31.326: INFO: Pod pod-0d7517e0-b865-406b-aeb9-85612fbaa1b4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:21:31.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9966" for this suite. • [SLOW TEST:6.421 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":48,"skipped":725,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:21:31.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-4522 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 4 13:21:31.443: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Sep 4 13:21:31.552: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 4 13:21:33.732: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 4 13:21:35.648: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 4 13:21:37.557: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 4 13:21:39.557: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 4 13:21:41.556: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 4 13:21:43.556: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 4 13:21:45.557: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 4 13:21:47.556: INFO: The status of Pod netserver-0 is 
Running (Ready = false) Sep 4 13:21:49.556: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 4 13:21:51.556: INFO: The status of Pod netserver-0 is Running (Ready = true) Sep 4 13:21:51.561: INFO: The status of Pod netserver-1 is Running (Ready = false) Sep 4 13:21:53.565: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Sep 4 13:21:59.630: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.103 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4522 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 4 13:21:59.630: INFO: >>> kubeConfig: /root/.kube/config I0904 13:21:59.660390 7 log.go:181] (0xc0029ae0b0) (0xc00391fcc0) Create stream I0904 13:21:59.660434 7 log.go:181] (0xc0029ae0b0) (0xc00391fcc0) Stream added, broadcasting: 1 I0904 13:21:59.662784 7 log.go:181] (0xc0029ae0b0) Reply frame received for 1 I0904 13:21:59.662828 7 log.go:181] (0xc0029ae0b0) (0xc00176c780) Create stream I0904 13:21:59.662844 7 log.go:181] (0xc0029ae0b0) (0xc00176c780) Stream added, broadcasting: 3 I0904 13:21:59.663878 7 log.go:181] (0xc0029ae0b0) Reply frame received for 3 I0904 13:21:59.663911 7 log.go:181] (0xc0029ae0b0) (0xc002129c20) Create stream I0904 13:21:59.663924 7 log.go:181] (0xc0029ae0b0) (0xc002129c20) Stream added, broadcasting: 5 I0904 13:21:59.664971 7 log.go:181] (0xc0029ae0b0) Reply frame received for 5 I0904 13:22:00.764350 7 log.go:181] (0xc0029ae0b0) Data frame received for 3 I0904 13:22:00.764456 7 log.go:181] (0xc00176c780) (3) Data frame handling I0904 13:22:00.764544 7 log.go:181] (0xc00176c780) (3) Data frame sent I0904 13:22:00.764625 7 log.go:181] (0xc0029ae0b0) Data frame received for 3 I0904 13:22:00.764657 7 log.go:181] (0xc00176c780) (3) Data frame handling I0904 13:22:00.764691 7 log.go:181] (0xc0029ae0b0) Data frame received for 5 I0904 13:22:00.764935 7 log.go:181] (0xc002129c20) (5) Data frame handling I0904 13:22:00.767156 7 log.go:181] (0xc0029ae0b0) Data frame received for 1 I0904 13:22:00.767239 7 log.go:181] (0xc00391fcc0) (1) Data frame handling I0904 13:22:00.767297 7 log.go:181] (0xc00391fcc0) (1) Data frame sent I0904 13:22:00.767340 7 log.go:181] (0xc0029ae0b0) (0xc00391fcc0) Stream removed, broadcasting: 1 I0904 13:22:00.767373 7 log.go:181] (0xc0029ae0b0) Go away received I0904 13:22:00.767524 7 log.go:181] (0xc0029ae0b0) (0xc00391fcc0) Stream removed, broadcasting: 1 I0904 13:22:00.767550 7 log.go:181] (0xc0029ae0b0) (0xc00176c780) Stream removed, broadcasting: 3 I0904 13:22:00.767562 7 log.go:181] (0xc0029ae0b0) (0xc002129c20) Stream removed, broadcasting: 5 Sep 4 13:22:00.767: INFO: Found all expected endpoints: [netserver-0] Sep 4 13:22:00.770: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.177 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4522 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 4 13:22:00.770: INFO: >>> kubeConfig: /root/.kube/config I0904 13:22:00.825333 7 log.go:181] (0xc0001438c0) (0xc00176caa0) Create stream I0904 13:22:00.825366 7 log.go:181] (0xc0001438c0) (0xc00176caa0) Stream added, broadcasting: 1 I0904 13:22:00.827247 7 log.go:181] (0xc0001438c0) Reply frame received for 1 I0904 13:22:00.827300 7 log.go:181] (0xc0001438c0) (0xc00176cb40) Create stream I0904 13:22:00.827318 7 log.go:181] (0xc0001438c0) (0xc00176cb40) Stream added, broadcasting: 3 I0904 
13:22:00.828186 7 log.go:181] (0xc0001438c0) Reply frame received for 3 I0904 13:22:00.828220 7 log.go:181] (0xc0001438c0) (0xc001692000) Create stream I0904 13:22:00.828231 7 log.go:181] (0xc0001438c0) (0xc001692000) Stream added, broadcasting: 5 I0904 13:22:00.829395 7 log.go:181] (0xc0001438c0) Reply frame received for 5 I0904 13:22:01.906213 7 log.go:181] (0xc0001438c0) Data frame received for 5 I0904 13:22:01.906255 7 log.go:181] (0xc001692000) (5) Data frame handling I0904 13:22:01.906290 7 log.go:181] (0xc0001438c0) Data frame received for 3 I0904 13:22:01.906306 7 log.go:181] (0xc00176cb40) (3) Data frame handling I0904 13:22:01.906323 7 log.go:181] (0xc00176cb40) (3) Data frame sent I0904 13:22:01.906340 7 log.go:181] (0xc0001438c0) Data frame received for 3 I0904 13:22:01.906355 7 log.go:181] (0xc00176cb40) (3) Data frame handling I0904 13:22:01.907857 7 log.go:181] (0xc0001438c0) Data frame received for 1 I0904 13:22:01.907882 7 log.go:181] (0xc00176caa0) (1) Data frame handling I0904 13:22:01.907901 7 log.go:181] (0xc00176caa0) (1) Data frame sent I0904 13:22:01.907921 7 log.go:181] (0xc0001438c0) (0xc00176caa0) Stream removed, broadcasting: 1 I0904 13:22:01.908011 7 log.go:181] (0xc0001438c0) (0xc00176caa0) Stream removed, broadcasting: 1 I0904 13:22:01.908025 7 log.go:181] (0xc0001438c0) (0xc00176cb40) Stream removed, broadcasting: 3 I0904 13:22:01.908032 7 log.go:181] (0xc0001438c0) (0xc001692000) Stream removed, broadcasting: 5 Sep 4 13:22:01.908: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:22:01.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0904 13:22:01.908446 7 log.go:181] (0xc0001438c0) Go away received STEP: Destroying namespace "pod-network-test-4522" for this suite. 
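------------------------------
Note on the probe above: the ExecWithOptions entries show the actual check the test runs: from a host-network helper pod, echo a token at the netserver's UDP port and expect its hostname back. The manual equivalent, using the pod and IP from this run (both differ per run), is:

kubectl exec -n pod-network-test-4522 host-test-container-pod -c agnhost -- \
  /bin/sh -c "echo hostName | nc -w 1 -u 10.244.2.103 8081"

An empty reply within the 1-second timeout points at a pod-network (CNI) problem on the node-to-pod path.
------------------------------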
• [SLOW TEST:30.581 seconds] [sig-network] Networking /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":49,"skipped":741,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:22:01.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info Sep 4 13:22:01.996: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config cluster-info' Sep 4 13:22:05.386: INFO: stderr: "" Sep 4 13:22:05.387: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45453\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45453/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:22:05.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1955" for this suite. 
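------------------------------
Note on the check above: the cluster-info test only validates that the control-plane ("Kubernetes master") service line is present in the output. For manual debugging, the same command plus its dump form (the output directory here is arbitrary) are:

kubectl cluster-info
# capture the full cluster state for offline analysis (can be large)
kubectl cluster-info dump --output-directory=/tmp/cluster-state
------------------------------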
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":303,"completed":50,"skipped":747,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:22:05.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if v1 is in available api versions [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions Sep 4 13:22:05.464: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config api-versions' Sep 4 13:22:05.674: INFO: stderr: "" Sep 4 13:22:05.674: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:22:05.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9161" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":303,"completed":51,"skipped":770,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:22:05.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-2901 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-2901 STEP: Deleting pre-stop pod Sep 4 13:22:21.169: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:22:21.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-2901" for this suite. 
• [SLOW TEST:15.545 seconds] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":303,"completed":52,"skipped":813,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:22:21.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-2852e05b-2515-42d7-a298-cfbdd50473b6 STEP: Creating secret with name s-test-opt-upd-ab2a2304-627a-46b0-9b9a-f975e4286cfd STEP: Creating the pod STEP: Deleting secret s-test-opt-del-2852e05b-2515-42d7-a298-cfbdd50473b6 STEP: Updating secret s-test-opt-upd-ab2a2304-627a-46b0-9b9a-f975e4286cfd STEP: Creating secret with name s-test-opt-create-3e8397b6-c7cc-4662-ab64-e3354c027697 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:23:59.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3172" for this suite. 
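------------------------------
Note on the volumes above: three secret volumes are involved, and the s-test-opt-create-* secret does not exist when the pod starts; the pod can only start because the volume is marked optional. A sketch of such a volume (pod name, image and secret name are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo        # illustrative
spec:
  containers:
  - name: main
    image: busybox:1.32             # illustrative
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: maybe-secret
      mountPath: /etc/maybe
  volumes:
  - name: maybe-secret
    secret:
      secretName: not-created-yet   # may be created after the pod starts
      optional: true                # pod starts even if the secret is absent
EOF

Once the secret is created or updated, the kubelet projects the new keys into the mounted directory on a subsequent sync, which is why the test ends with "waiting to observe update in volume".
------------------------------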
• [SLOW TEST:97.945 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":53,"skipped":827,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:23:59.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Sep 4 13:23:59.245: INFO: Waiting up to 5m0s for pod "pod-8cc2fb01-fd0f-4b52-869c-2cd2d21af63d" in namespace "emptydir-5239" to be "Succeeded or Failed" Sep 4 13:23:59.257: INFO: Pod "pod-8cc2fb01-fd0f-4b52-869c-2cd2d21af63d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.314431ms Sep 4 13:24:01.307: INFO: Pod "pod-8cc2fb01-fd0f-4b52-869c-2cd2d21af63d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062174345s Sep 4 13:24:03.316: INFO: Pod "pod-8cc2fb01-fd0f-4b52-869c-2cd2d21af63d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070936463s STEP: Saw pod success Sep 4 13:24:03.316: INFO: Pod "pod-8cc2fb01-fd0f-4b52-869c-2cd2d21af63d" satisfied condition "Succeeded or Failed" Sep 4 13:24:03.318: INFO: Trying to get logs from node latest-worker pod pod-8cc2fb01-fd0f-4b52-869c-2cd2d21af63d container test-container: STEP: delete the pod Sep 4 13:24:03.388: INFO: Waiting for pod pod-8cc2fb01-fd0f-4b52-869c-2cd2d21af63d to disappear Sep 4 13:24:03.402: INFO: Pod pod-8cc2fb01-fd0f-4b52-869c-2cd2d21af63d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:24:03.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5239" for this suite. 
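------------------------------
Note on the check above: the (root,0644,tmpfs) variant has the test container create a file with the requested mode on a memory-backed emptyDir and verify what it sees. An approximate manual equivalent (pod name, image and paths are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo          # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.32             # illustrative
    command: ["sh", "-c",
      "echo hi > /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f && mount | grep ' /mnt '"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                # tmpfs-backed, as in the (tmpfs) variants
EOF
kubectl logs emptydir-mode-demo     # after completion: expect -rw-r--r-- and a tmpfs mount line
------------------------------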
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":54,"skipped":839,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:24:03.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 4 13:24:03.915: INFO: Waiting up to 5m0s for pod "downwardapi-volume-616e1e24-618e-4f91-8386-3e6222fd6f48" in namespace "downward-api-1631" to be "Succeeded or Failed" Sep 4 13:24:03.936: INFO: Pod "downwardapi-volume-616e1e24-618e-4f91-8386-3e6222fd6f48": Phase="Pending", Reason="", readiness=false. Elapsed: 21.15152ms Sep 4 13:24:05.991: INFO: Pod "downwardapi-volume-616e1e24-618e-4f91-8386-3e6222fd6f48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076411637s Sep 4 13:24:07.995: INFO: Pod "downwardapi-volume-616e1e24-618e-4f91-8386-3e6222fd6f48": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080266358s Sep 4 13:24:09.998: INFO: Pod "downwardapi-volume-616e1e24-618e-4f91-8386-3e6222fd6f48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.083748422s STEP: Saw pod success Sep 4 13:24:09.998: INFO: Pod "downwardapi-volume-616e1e24-618e-4f91-8386-3e6222fd6f48" satisfied condition "Succeeded or Failed" Sep 4 13:24:10.000: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-616e1e24-618e-4f91-8386-3e6222fd6f48 container client-container: STEP: delete the pod Sep 4 13:24:10.034: INFO: Waiting for pod downwardapi-volume-616e1e24-618e-4f91-8386-3e6222fd6f48 to disappear Sep 4 13:24:10.047: INFO: Pod downwardapi-volume-616e1e24-618e-4f91-8386-3e6222fd6f48 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:24:10.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1631" for this suite. 
• [SLOW TEST:6.645 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":55,"skipped":844,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:24:10.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 13:24:10.199: INFO: The status of Pod test-webserver-c183a39b-d3f5-4043-b2d7-814b5857cd6c is Pending, waiting for it to be Running (with Ready = true) Sep 4 13:24:12.206: INFO: The status of Pod test-webserver-c183a39b-d3f5-4043-b2d7-814b5857cd6c is Pending, waiting for it to be Running (with Ready = true) Sep 4 13:24:14.204: INFO: The status of Pod test-webserver-c183a39b-d3f5-4043-b2d7-814b5857cd6c is Running (Ready = false) Sep 4 13:24:16.202: INFO: The status of Pod test-webserver-c183a39b-d3f5-4043-b2d7-814b5857cd6c is Running (Ready = false) Sep 4 13:24:18.203: INFO: The status of Pod test-webserver-c183a39b-d3f5-4043-b2d7-814b5857cd6c is Running (Ready = false) Sep 4 13:24:20.202: INFO: The status of Pod test-webserver-c183a39b-d3f5-4043-b2d7-814b5857cd6c is Running (Ready = false) Sep 4 13:24:22.204: INFO: The status of Pod test-webserver-c183a39b-d3f5-4043-b2d7-814b5857cd6c is Running (Ready = false) Sep 4 13:24:24.203: INFO: The status of Pod test-webserver-c183a39b-d3f5-4043-b2d7-814b5857cd6c is Running (Ready = false) Sep 4 13:24:26.202: INFO: The status of Pod test-webserver-c183a39b-d3f5-4043-b2d7-814b5857cd6c is Running (Ready = false) Sep 4 13:24:28.202: INFO: The status of Pod test-webserver-c183a39b-d3f5-4043-b2d7-814b5857cd6c is Running (Ready = false) Sep 4 13:24:30.203: INFO: The status of Pod test-webserver-c183a39b-d3f5-4043-b2d7-814b5857cd6c is Running (Ready = false) Sep 4 13:24:32.203: INFO: The status of Pod test-webserver-c183a39b-d3f5-4043-b2d7-814b5857cd6c is Running (Ready = true) Sep 4 13:24:32.206: INFO: Container started at 2020-09-04 13:24:12 +0000 UTC, pod became 
ready at 2020-09-04 13:24:30 +0000 UTC [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:24:32.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6868" for this suite. • [SLOW TEST:22.158 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":303,"completed":56,"skipped":902,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:24:32.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-3951 Sep 4 13:24:36.356: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-3951 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Sep 4 13:24:36.599: INFO: stderr: "I0904 13:24:36.511519 1063 log.go:181] (0xc000d98c60) (0xc000d14640) Create stream\nI0904 13:24:36.511573 1063 log.go:181] (0xc000d98c60) (0xc000d14640) Stream added, broadcasting: 1\nI0904 13:24:36.514196 1063 log.go:181] (0xc000d98c60) Reply frame received for 1\nI0904 13:24:36.514240 1063 log.go:181] (0xc000d98c60) (0xc000d146e0) Create stream\nI0904 13:24:36.514250 1063 log.go:181] (0xc000d98c60) (0xc000d146e0) Stream added, broadcasting: 3\nI0904 13:24:36.515135 1063 log.go:181] (0xc000d98c60) Reply frame received for 3\nI0904 13:24:36.515167 1063 log.go:181] (0xc000d98c60) (0xc0005ac640) Create stream\nI0904 13:24:36.515180 1063 log.go:181] (0xc000d98c60) (0xc0005ac640) Stream added, broadcasting: 5\nI0904 13:24:36.516007 1063 log.go:181] (0xc000d98c60) Reply frame received for 5\nI0904 13:24:36.586710 1063 log.go:181] (0xc000d98c60) Data frame received for 5\nI0904 13:24:36.586746 1063 log.go:181] (0xc0005ac640) (5) Data frame 
handling\nI0904 13:24:36.586766 1063 log.go:181] (0xc0005ac640) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0904 13:24:36.589406 1063 log.go:181] (0xc000d98c60) Data frame received for 5\nI0904 13:24:36.589425 1063 log.go:181] (0xc0005ac640) (5) Data frame handling\nI0904 13:24:36.589459 1063 log.go:181] (0xc000d98c60) Data frame received for 3\nI0904 13:24:36.589494 1063 log.go:181] (0xc000d146e0) (3) Data frame handling\nI0904 13:24:36.589522 1063 log.go:181] (0xc000d146e0) (3) Data frame sent\nI0904 13:24:36.589577 1063 log.go:181] (0xc000d98c60) Data frame received for 3\nI0904 13:24:36.589602 1063 log.go:181] (0xc000d146e0) (3) Data frame handling\nI0904 13:24:36.591443 1063 log.go:181] (0xc000d98c60) Data frame received for 1\nI0904 13:24:36.591462 1063 log.go:181] (0xc000d14640) (1) Data frame handling\nI0904 13:24:36.591471 1063 log.go:181] (0xc000d14640) (1) Data frame sent\nI0904 13:24:36.591482 1063 log.go:181] (0xc000d98c60) (0xc000d14640) Stream removed, broadcasting: 1\nI0904 13:24:36.591495 1063 log.go:181] (0xc000d98c60) Go away received\nI0904 13:24:36.591809 1063 log.go:181] (0xc000d98c60) (0xc000d14640) Stream removed, broadcasting: 1\nI0904 13:24:36.591842 1063 log.go:181] (0xc000d98c60) (0xc000d146e0) Stream removed, broadcasting: 3\nI0904 13:24:36.591851 1063 log.go:181] (0xc000d98c60) (0xc0005ac640) Stream removed, broadcasting: 5\n" Sep 4 13:24:36.599: INFO: stdout: "iptables" Sep 4 13:24:36.599: INFO: proxyMode: iptables Sep 4 13:24:36.604: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 4 13:24:36.628: INFO: Pod kube-proxy-mode-detector still exists Sep 4 13:24:38.628: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 4 13:24:38.632: INFO: Pod kube-proxy-mode-detector still exists Sep 4 13:24:40.628: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 4 13:24:40.633: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-3951 STEP: creating replication controller affinity-nodeport-timeout in namespace services-3951 I0904 13:24:40.759897 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-3951, replica count: 3 I0904 13:24:43.810359 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0904 13:24:46.810538 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0904 13:24:49.810752 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 4 13:24:49.821: INFO: Creating new exec pod Sep 4 13:24:56.847: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-3951 execpod-affinitynd9h9 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Sep 4 13:24:57.067: INFO: stderr: "I0904 13:24:56.975148 1080 log.go:181] (0xc0008bb760) (0xc00072cb40) Create stream\nI0904 13:24:56.975194 1080 log.go:181] (0xc0008bb760) (0xc00072cb40) Stream added, broadcasting: 1\nI0904 13:24:56.982656 1080 log.go:181] (0xc0008bb760) Reply frame received for 1\nI0904 13:24:56.982699 1080 log.go:181] (0xc0008bb760) (0xc000c4a000) Create stream\nI0904 13:24:56.982710 1080 log.go:181] 
(0xc0008bb760) (0xc000c4a000) Stream added, broadcasting: 3\nI0904 13:24:56.983751 1080 log.go:181] (0xc0008bb760) Reply frame received for 3\nI0904 13:24:56.983867 1080 log.go:181] (0xc0008bb760) (0xc00072c000) Create stream\nI0904 13:24:56.983959 1080 log.go:181] (0xc0008bb760) (0xc00072c000) Stream added, broadcasting: 5\nI0904 13:24:56.984951 1080 log.go:181] (0xc0008bb760) Reply frame received for 5\nI0904 13:24:57.057578 1080 log.go:181] (0xc0008bb760) Data frame received for 5\nI0904 13:24:57.057610 1080 log.go:181] (0xc00072c000) (5) Data frame handling\nI0904 13:24:57.057626 1080 log.go:181] (0xc00072c000) (5) Data frame sent\nI0904 13:24:57.057634 1080 log.go:181] (0xc0008bb760) Data frame received for 5\nI0904 13:24:57.057641 1080 log.go:181] (0xc00072c000) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0904 13:24:57.057659 1080 log.go:181] (0xc00072c000) (5) Data frame sent\nI0904 13:24:57.057827 1080 log.go:181] (0xc0008bb760) Data frame received for 5\nI0904 13:24:57.057858 1080 log.go:181] (0xc00072c000) (5) Data frame handling\nI0904 13:24:57.057878 1080 log.go:181] (0xc0008bb760) Data frame received for 3\nI0904 13:24:57.057889 1080 log.go:181] (0xc000c4a000) (3) Data frame handling\nI0904 13:24:57.059156 1080 log.go:181] (0xc0008bb760) Data frame received for 1\nI0904 13:24:57.059171 1080 log.go:181] (0xc00072cb40) (1) Data frame handling\nI0904 13:24:57.059185 1080 log.go:181] (0xc00072cb40) (1) Data frame sent\nI0904 13:24:57.059195 1080 log.go:181] (0xc0008bb760) (0xc00072cb40) Stream removed, broadcasting: 1\nI0904 13:24:57.059213 1080 log.go:181] (0xc0008bb760) Go away received\nI0904 13:24:57.059563 1080 log.go:181] (0xc0008bb760) (0xc00072cb40) Stream removed, broadcasting: 1\nI0904 13:24:57.059583 1080 log.go:181] (0xc0008bb760) (0xc000c4a000) Stream removed, broadcasting: 3\nI0904 13:24:57.059592 1080 log.go:181] (0xc0008bb760) (0xc00072c000) Stream removed, broadcasting: 5\n" Sep 4 13:24:57.067: INFO: stdout: "" Sep 4 13:24:57.067: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-3951 execpod-affinitynd9h9 -- /bin/sh -x -c nc -zv -t -w 2 10.111.135.92 80' Sep 4 13:24:57.281: INFO: stderr: "I0904 13:24:57.207720 1098 log.go:181] (0xc0008aafd0) (0xc00052d400) Create stream\nI0904 13:24:57.207777 1098 log.go:181] (0xc0008aafd0) (0xc00052d400) Stream added, broadcasting: 1\nI0904 13:24:57.212069 1098 log.go:181] (0xc0008aafd0) Reply frame received for 1\nI0904 13:24:57.212103 1098 log.go:181] (0xc0008aafd0) (0xc0009f8460) Create stream\nI0904 13:24:57.212111 1098 log.go:181] (0xc0008aafd0) (0xc0009f8460) Stream added, broadcasting: 3\nI0904 13:24:57.213098 1098 log.go:181] (0xc0008aafd0) Reply frame received for 3\nI0904 13:24:57.213138 1098 log.go:181] (0xc0008aafd0) (0xc00031c1e0) Create stream\nI0904 13:24:57.213150 1098 log.go:181] (0xc0008aafd0) (0xc00031c1e0) Stream added, broadcasting: 5\nI0904 13:24:57.214066 1098 log.go:181] (0xc0008aafd0) Reply frame received for 5\nI0904 13:24:57.271623 1098 log.go:181] (0xc0008aafd0) Data frame received for 3\nI0904 13:24:57.271680 1098 log.go:181] (0xc0009f8460) (3) Data frame handling\nI0904 13:24:57.271709 1098 log.go:181] (0xc0008aafd0) Data frame received for 5\nI0904 13:24:57.271721 1098 log.go:181] (0xc00031c1e0) (5) Data frame handling\nI0904 13:24:57.271732 1098 log.go:181] (0xc00031c1e0) (5) Data frame sent\nI0904 13:24:57.271745 1098 
log.go:181] (0xc0008aafd0) Data frame received for 5\nI0904 13:24:57.271754 1098 log.go:181] (0xc00031c1e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.135.92 80\nConnection to 10.111.135.92 80 port [tcp/http] succeeded!\nI0904 13:24:57.273259 1098 log.go:181] (0xc0008aafd0) Data frame received for 1\nI0904 13:24:57.273281 1098 log.go:181] (0xc00052d400) (1) Data frame handling\nI0904 13:24:57.273293 1098 log.go:181] (0xc00052d400) (1) Data frame sent\nI0904 13:24:57.273307 1098 log.go:181] (0xc0008aafd0) (0xc00052d400) Stream removed, broadcasting: 1\nI0904 13:24:57.273339 1098 log.go:181] (0xc0008aafd0) Go away received\nI0904 13:24:57.273644 1098 log.go:181] (0xc0008aafd0) (0xc00052d400) Stream removed, broadcasting: 1\nI0904 13:24:57.273659 1098 log.go:181] (0xc0008aafd0) (0xc0009f8460) Stream removed, broadcasting: 3\nI0904 13:24:57.273666 1098 log.go:181] (0xc0008aafd0) (0xc00031c1e0) Stream removed, broadcasting: 5\n" Sep 4 13:24:57.281: INFO: stdout: "" Sep 4 13:24:57.281: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-3951 execpod-affinitynd9h9 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 32329' Sep 4 13:24:57.508: INFO: stderr: "I0904 13:24:57.416855 1115 log.go:181] (0xc00017a000) (0xc000c40000) Create stream\nI0904 13:24:57.416915 1115 log.go:181] (0xc00017a000) (0xc000c40000) Stream added, broadcasting: 1\nI0904 13:24:57.418985 1115 log.go:181] (0xc00017a000) Reply frame received for 1\nI0904 13:24:57.419017 1115 log.go:181] (0xc00017a000) (0xc00030a000) Create stream\nI0904 13:24:57.419025 1115 log.go:181] (0xc00017a000) (0xc00030a000) Stream added, broadcasting: 3\nI0904 13:24:57.419862 1115 log.go:181] (0xc00017a000) Reply frame received for 3\nI0904 13:24:57.419889 1115 log.go:181] (0xc00017a000) (0xc000888000) Create stream\nI0904 13:24:57.419896 1115 log.go:181] (0xc00017a000) (0xc000888000) Stream added, broadcasting: 5\nI0904 13:24:57.420591 1115 log.go:181] (0xc00017a000) Reply frame received for 5\nI0904 13:24:57.492976 1115 log.go:181] (0xc00017a000) Data frame received for 3\nI0904 13:24:57.493026 1115 log.go:181] (0xc00030a000) (3) Data frame handling\nI0904 13:24:57.493054 1115 log.go:181] (0xc00017a000) Data frame received for 5\nI0904 13:24:57.493067 1115 log.go:181] (0xc000888000) (5) Data frame handling\nI0904 13:24:57.493081 1115 log.go:181] (0xc000888000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.11 32329\nConnection to 172.18.0.11 32329 port [tcp/32329] succeeded!\nI0904 13:24:57.493389 1115 log.go:181] (0xc00017a000) Data frame received for 5\nI0904 13:24:57.493423 1115 log.go:181] (0xc000888000) (5) Data frame handling\nI0904 13:24:57.494424 1115 log.go:181] (0xc00017a000) Data frame received for 1\nI0904 13:24:57.494452 1115 log.go:181] (0xc000c40000) (1) Data frame handling\nI0904 13:24:57.494464 1115 log.go:181] (0xc000c40000) (1) Data frame sent\nI0904 13:24:57.494475 1115 log.go:181] (0xc00017a000) (0xc000c40000) Stream removed, broadcasting: 1\nI0904 13:24:57.494491 1115 log.go:181] (0xc00017a000) Go away received\nI0904 13:24:57.494961 1115 log.go:181] (0xc00017a000) (0xc000c40000) Stream removed, broadcasting: 1\nI0904 13:24:57.494978 1115 log.go:181] (0xc00017a000) (0xc00030a000) Stream removed, broadcasting: 3\nI0904 13:24:57.494986 1115 log.go:181] (0xc00017a000) (0xc000888000) Stream removed, broadcasting: 5\n" Sep 4 13:24:57.508: INFO: stdout: "" Sep 4 13:24:57.508: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 
--kubeconfig=/root/.kube/config exec --namespace=services-3951 execpod-affinitynd9h9 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 32329' Sep 4 13:24:57.750: INFO: stderr: "I0904 13:24:57.664973 1133 log.go:181] (0xc00003a0b0) (0xc000896000) Create stream\nI0904 13:24:57.665035 1133 log.go:181] (0xc00003a0b0) (0xc000896000) Stream added, broadcasting: 1\nI0904 13:24:57.667017 1133 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0904 13:24:57.667038 1133 log.go:181] (0xc00003a0b0) (0xc000d8a000) Create stream\nI0904 13:24:57.667046 1133 log.go:181] (0xc00003a0b0) (0xc000d8a000) Stream added, broadcasting: 3\nI0904 13:24:57.667767 1133 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0904 13:24:57.667813 1133 log.go:181] (0xc00003a0b0) (0xc000670000) Create stream\nI0904 13:24:57.667829 1133 log.go:181] (0xc00003a0b0) (0xc000670000) Stream added, broadcasting: 5\nI0904 13:24:57.668570 1133 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0904 13:24:57.736448 1133 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:57.736484 1133 log.go:181] (0xc000d8a000) (3) Data frame handling\nI0904 13:24:57.736510 1133 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0904 13:24:57.736519 1133 log.go:181] (0xc000670000) (5) Data frame handling\nI0904 13:24:57.736528 1133 log.go:181] (0xc000670000) (5) Data frame sent\nI0904 13:24:57.736536 1133 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0904 13:24:57.736548 1133 log.go:181] (0xc000670000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 32329\nConnection to 172.18.0.14 32329 port [tcp/32329] succeeded!\nI0904 13:24:57.737902 1133 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0904 13:24:57.737924 1133 log.go:181] (0xc000896000) (1) Data frame handling\nI0904 13:24:57.737936 1133 log.go:181] (0xc000896000) (1) Data frame sent\nI0904 13:24:57.737949 1133 log.go:181] (0xc00003a0b0) (0xc000896000) Stream removed, broadcasting: 1\nI0904 13:24:57.738086 1133 log.go:181] (0xc00003a0b0) Go away received\nI0904 13:24:57.738210 1133 log.go:181] (0xc00003a0b0) (0xc000896000) Stream removed, broadcasting: 1\nI0904 13:24:57.738226 1133 log.go:181] (0xc00003a0b0) (0xc000d8a000) Stream removed, broadcasting: 3\nI0904 13:24:57.738232 1133 log.go:181] (0xc00003a0b0) (0xc000670000) Stream removed, broadcasting: 5\n" Sep 4 13:24:57.750: INFO: stdout: "" Sep 4 13:24:57.750: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-3951 execpod-affinitynd9h9 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:32329/ ; done' Sep 4 13:24:58.077: INFO: stderr: "I0904 13:24:57.878131 1152 log.go:181] (0xc00003a0b0) (0xc000cf0000) Create stream\nI0904 13:24:57.878203 1152 log.go:181] (0xc00003a0b0) (0xc000cf0000) Stream added, broadcasting: 1\nI0904 13:24:57.880568 1152 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0904 13:24:57.880626 1152 log.go:181] (0xc00003a0b0) (0xc000eb01e0) Create stream\nI0904 13:24:57.880646 1152 log.go:181] (0xc00003a0b0) (0xc000eb01e0) Stream added, broadcasting: 3\nI0904 13:24:57.882024 1152 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0904 13:24:57.882071 1152 log.go:181] (0xc00003a0b0) (0xc000d54be0) Create stream\nI0904 13:24:57.882101 1152 log.go:181] (0xc00003a0b0) (0xc000d54be0) Stream added, broadcasting: 5\nI0904 13:24:57.883205 1152 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0904 13:24:57.950283 1152 log.go:181] (0xc00003a0b0) 
Data frame received for 3\nI0904 13:24:57.950306 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:57.950317 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:57.952561 1152 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0904 13:24:57.952576 1152 log.go:181] (0xc000d54be0) (5) Data frame handling\nI0904 13:24:57.952595 1152 log.go:181] (0xc000d54be0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32329/\nI0904 13:24:57.977651 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:57.977675 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:57.977695 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:57.977956 1152 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0904 13:24:57.977992 1152 log.go:181] (0xc000d54be0) (5) Data frame handling\nI0904 13:24:57.978019 1152 log.go:181] (0xc000d54be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32329/\nI0904 13:24:57.978056 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:57.978081 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:57.978117 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:57.983974 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:57.984007 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:57.984032 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:57.985184 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:57.985213 1152 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0904 13:24:57.985270 1152 log.go:181] (0xc000d54be0) (5) Data frame handling\nI0904 13:24:57.985304 1152 log.go:181] (0xc000d54be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32329/\nI0904 13:24:57.985324 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:57.985362 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:57.990677 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:57.990700 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:57.990718 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:57.991391 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:57.991403 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:57.991411 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:57.991424 1152 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0904 13:24:57.991431 1152 log.go:181] (0xc000d54be0) (5) Data frame handling\nI0904 13:24:57.991436 1152 log.go:181] (0xc000d54be0) (5) Data frame sent\nI0904 13:24:57.991440 1152 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0904 13:24:57.991444 1152 log.go:181] (0xc000d54be0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32329/\nI0904 13:24:57.991453 1152 log.go:181] (0xc000d54be0) (5) Data frame sent\nI0904 13:24:57.994956 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:57.994974 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:57.994999 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:57.995619 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:57.995634 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:57.995640 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:57.995650 1152 
log.go:181] (0xc00003a0b0) Data frame received for 5\nI0904 13:24:57.995656 1152 log.go:181] (0xc000d54be0) (5) Data frame handling\nI0904 13:24:57.995663 1152 log.go:181] (0xc000d54be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32329/\nI0904 13:24:58.000249 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:58.000277 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:58.000297 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:58.000546 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:58.000566 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:58.000575 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:58.000588 1152 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0904 13:24:58.000594 1152 log.go:181] (0xc000d54be0) (5) Data frame handling\nI0904 13:24:58.000602 1152 log.go:181] (0xc000d54be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32329/\nI0904 13:24:58.004978 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:58.005000 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:58.005011 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:58.009083 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:58.009102 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:58.009123 1152 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0904 13:24:58.009148 1152 log.go:181] (0xc000d54be0) (5) Data frame handling\nI0904 13:24:58.009162 1152 log.go:181] (0xc000d54be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32329/\nI0904 13:24:58.009177 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:58.013827 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:58.013844 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:58.013857 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:58.014372 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:58.014383 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:58.014389 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:58.014405 1152 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0904 13:24:58.014425 1152 log.go:181] (0xc000d54be0) (5) Data frame handling\nI0904 13:24:58.014445 1152 log.go:181] (0xc000d54be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32329/\nI0904 13:24:58.019143 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:58.019165 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:58.019182 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:58.019595 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:58.019614 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:58.019621 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:58.019631 1152 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0904 13:24:58.019636 1152 log.go:181] (0xc000d54be0) (5) Data frame handling\nI0904 13:24:58.019641 1152 log.go:181] (0xc000d54be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32329/\nI0904 13:24:58.023553 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:58.023576 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:58.023585 1152 
log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:58.024158 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:58.024172 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:58.024179 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:58.024189 1152 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0904 13:24:58.024194 1152 log.go:181] (0xc000d54be0) (5) Data frame handling\nI0904 13:24:58.024198 1152 log.go:181] (0xc000d54be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32329/\nI0904 13:24:58.028921 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:58.028935 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:58.028946 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:58.029364 1152 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0904 13:24:58.029378 1152 log.go:181] (0xc000d54be0) (5) Data frame handling\nI0904 13:24:58.029385 1152 log.go:181] (0xc000d54be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32329/\nI0904 13:24:58.029398 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:58.029411 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:58.029418 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:58.033350 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:58.033373 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:58.033392 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:58.033980 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:58.033991 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:58.034003 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:58.034011 1152 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0904 13:24:58.034017 1152 log.go:181] (0xc000d54be0) (5) Data frame handling\nI0904 13:24:58.034028 1152 log.go:181] (0xc000d54be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32329/\nI0904 13:24:58.038479 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:58.038500 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:58.038517 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:58.038937 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:58.038963 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:58.038973 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:58.038984 1152 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0904 13:24:58.038991 1152 log.go:181] (0xc000d54be0) (5) Data frame handling\nI0904 13:24:58.038998 1152 log.go:181] (0xc000d54be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32329/\nI0904 13:24:58.045279 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:58.045307 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:58.045329 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:58.046018 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:58.046036 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:58.046051 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:58.046068 1152 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0904 13:24:58.046082 1152 log.go:181] (0xc000d54be0) (5) Data frame handling\nI0904 
13:24:58.046101 1152 log.go:181] (0xc000d54be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32329/\nI0904 13:24:58.052651 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:58.052670 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:58.052689 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:58.053529 1152 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0904 13:24:58.053555 1152 log.go:181] (0xc000d54be0) (5) Data frame handling\nI0904 13:24:58.053565 1152 log.go:181] (0xc000d54be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32329/\nI0904 13:24:58.053580 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:58.053589 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:58.053598 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:58.060414 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:58.060429 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:58.060435 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:58.061534 1152 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0904 13:24:58.061562 1152 log.go:181] (0xc000d54be0) (5) Data frame handling\nI0904 13:24:58.061575 1152 log.go:181] (0xc000d54be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32329/\nI0904 13:24:58.061592 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:58.061602 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:58.061612 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:58.066384 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:58.066418 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:58.066432 1152 log.go:181] (0xc000eb01e0) (3) Data frame sent\nI0904 13:24:58.067026 1152 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0904 13:24:58.067062 1152 log.go:181] (0xc000d54be0) (5) Data frame handling\nI0904 13:24:58.067418 1152 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:24:58.067446 1152 log.go:181] (0xc000eb01e0) (3) Data frame handling\nI0904 13:24:58.069096 1152 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0904 13:24:58.069128 1152 log.go:181] (0xc000cf0000) (1) Data frame handling\nI0904 13:24:58.069149 1152 log.go:181] (0xc000cf0000) (1) Data frame sent\nI0904 13:24:58.069180 1152 log.go:181] (0xc00003a0b0) (0xc000cf0000) Stream removed, broadcasting: 1\nI0904 13:24:58.069253 1152 log.go:181] (0xc00003a0b0) Go away received\nI0904 13:24:58.069740 1152 log.go:181] (0xc00003a0b0) (0xc000cf0000) Stream removed, broadcasting: 1\nI0904 13:24:58.069761 1152 log.go:181] (0xc00003a0b0) (0xc000eb01e0) Stream removed, broadcasting: 3\nI0904 13:24:58.069772 1152 log.go:181] (0xc00003a0b0) (0xc000d54be0) Stream removed, broadcasting: 5\n" Sep 4 13:24:58.079: INFO: stdout: "\naffinity-nodeport-timeout-vj8hc\naffinity-nodeport-timeout-vj8hc\naffinity-nodeport-timeout-vj8hc\naffinity-nodeport-timeout-vj8hc\naffinity-nodeport-timeout-vj8hc\naffinity-nodeport-timeout-vj8hc\naffinity-nodeport-timeout-vj8hc\naffinity-nodeport-timeout-vj8hc\naffinity-nodeport-timeout-vj8hc\naffinity-nodeport-timeout-vj8hc\naffinity-nodeport-timeout-vj8hc\naffinity-nodeport-timeout-vj8hc\naffinity-nodeport-timeout-vj8hc\naffinity-nodeport-timeout-vj8hc\naffinity-nodeport-timeout-vj8hc\naffinity-nodeport-timeout-vj8hc" Sep 4 13:24:58.079: INFO: Received 
response from host: affinity-nodeport-timeout-vj8hc Sep 4 13:24:58.079: INFO: Received response from host: affinity-nodeport-timeout-vj8hc Sep 4 13:24:58.079: INFO: Received response from host: affinity-nodeport-timeout-vj8hc Sep 4 13:24:58.079: INFO: Received response from host: affinity-nodeport-timeout-vj8hc Sep 4 13:24:58.079: INFO: Received response from host: affinity-nodeport-timeout-vj8hc Sep 4 13:24:58.079: INFO: Received response from host: affinity-nodeport-timeout-vj8hc Sep 4 13:24:58.079: INFO: Received response from host: affinity-nodeport-timeout-vj8hc Sep 4 13:24:58.079: INFO: Received response from host: affinity-nodeport-timeout-vj8hc Sep 4 13:24:58.079: INFO: Received response from host: affinity-nodeport-timeout-vj8hc Sep 4 13:24:58.079: INFO: Received response from host: affinity-nodeport-timeout-vj8hc Sep 4 13:24:58.079: INFO: Received response from host: affinity-nodeport-timeout-vj8hc Sep 4 13:24:58.079: INFO: Received response from host: affinity-nodeport-timeout-vj8hc Sep 4 13:24:58.079: INFO: Received response from host: affinity-nodeport-timeout-vj8hc Sep 4 13:24:58.079: INFO: Received response from host: affinity-nodeport-timeout-vj8hc Sep 4 13:24:58.079: INFO: Received response from host: affinity-nodeport-timeout-vj8hc Sep 4 13:24:58.079: INFO: Received response from host: affinity-nodeport-timeout-vj8hc Sep 4 13:24:58.079: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-3951 execpod-affinitynd9h9 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.11:32329/' Sep 4 13:24:58.310: INFO: stderr: "I0904 13:24:58.225394 1170 log.go:181] (0xc00003ad10) (0xc000980460) Create stream\nI0904 13:24:58.225451 1170 log.go:181] (0xc00003ad10) (0xc000980460) Stream added, broadcasting: 1\nI0904 13:24:58.227997 1170 log.go:181] (0xc00003ad10) Reply frame received for 1\nI0904 13:24:58.228052 1170 log.go:181] (0xc00003ad10) (0xc000980500) Create stream\nI0904 13:24:58.228068 1170 log.go:181] (0xc00003ad10) (0xc000980500) Stream added, broadcasting: 3\nI0904 13:24:58.233718 1170 log.go:181] (0xc00003ad10) Reply frame received for 3\nI0904 13:24:58.233757 1170 log.go:181] (0xc00003ad10) (0xc00085e000) Create stream\nI0904 13:24:58.233767 1170 log.go:181] (0xc00003ad10) (0xc00085e000) Stream added, broadcasting: 5\nI0904 13:24:58.234631 1170 log.go:181] (0xc00003ad10) Reply frame received for 5\nI0904 13:24:58.297460 1170 log.go:181] (0xc00003ad10) Data frame received for 5\nI0904 13:24:58.297495 1170 log.go:181] (0xc00085e000) (5) Data frame handling\nI0904 13:24:58.297516 1170 log.go:181] (0xc00085e000) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32329/\nI0904 13:24:58.301121 1170 log.go:181] (0xc00003ad10) Data frame received for 3\nI0904 13:24:58.301134 1170 log.go:181] (0xc000980500) (3) Data frame handling\nI0904 13:24:58.301151 1170 log.go:181] (0xc000980500) (3) Data frame sent\nI0904 13:24:58.301830 1170 log.go:181] (0xc00003ad10) Data frame received for 5\nI0904 13:24:58.301846 1170 log.go:181] (0xc00085e000) (5) Data frame handling\nI0904 13:24:58.301891 1170 log.go:181] (0xc00003ad10) Data frame received for 3\nI0904 13:24:58.301903 1170 log.go:181] (0xc000980500) (3) Data frame handling\nI0904 13:24:58.303012 1170 log.go:181] (0xc00003ad10) Data frame received for 1\nI0904 13:24:58.303031 1170 log.go:181] (0xc000980460) (1) Data frame handling\nI0904 13:24:58.303041 1170 log.go:181] (0xc000980460) (1) Data frame sent\nI0904 13:24:58.303051 
1170 log.go:181] (0xc00003ad10) (0xc000980460) Stream removed, broadcasting: 1\nI0904 13:24:58.303064 1170 log.go:181] (0xc00003ad10) Go away received\nI0904 13:24:58.303431 1170 log.go:181] (0xc00003ad10) (0xc000980460) Stream removed, broadcasting: 1\nI0904 13:24:58.303443 1170 log.go:181] (0xc00003ad10) (0xc000980500) Stream removed, broadcasting: 3\nI0904 13:24:58.303449 1170 log.go:181] (0xc00003ad10) (0xc00085e000) Stream removed, broadcasting: 5\n" Sep 4 13:24:58.310: INFO: stdout: "affinity-nodeport-timeout-vj8hc" Sep 4 13:25:13.311: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-3951 execpod-affinitynd9h9 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.11:32329/' Sep 4 13:25:13.521: INFO: stderr: "I0904 13:25:13.442824 1188 log.go:181] (0xc000c97550) (0xc000828500) Create stream\nI0904 13:25:13.442873 1188 log.go:181] (0xc000c97550) (0xc000828500) Stream added, broadcasting: 1\nI0904 13:25:13.445357 1188 log.go:181] (0xc000c97550) Reply frame received for 1\nI0904 13:25:13.445392 1188 log.go:181] (0xc000c97550) (0xc000f0c1e0) Create stream\nI0904 13:25:13.445415 1188 log.go:181] (0xc000c97550) (0xc000f0c1e0) Stream added, broadcasting: 3\nI0904 13:25:13.446182 1188 log.go:181] (0xc000c97550) Reply frame received for 3\nI0904 13:25:13.446203 1188 log.go:181] (0xc000c97550) (0xc0009226e0) Create stream\nI0904 13:25:13.446210 1188 log.go:181] (0xc000c97550) (0xc0009226e0) Stream added, broadcasting: 5\nI0904 13:25:13.446975 1188 log.go:181] (0xc000c97550) Reply frame received for 5\nI0904 13:25:13.509897 1188 log.go:181] (0xc000c97550) Data frame received for 5\nI0904 13:25:13.509916 1188 log.go:181] (0xc0009226e0) (5) Data frame handling\nI0904 13:25:13.509930 1188 log.go:181] (0xc0009226e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32329/\nI0904 13:25:13.513399 1188 log.go:181] (0xc000c97550) Data frame received for 3\nI0904 13:25:13.513409 1188 log.go:181] (0xc000f0c1e0) (3) Data frame handling\nI0904 13:25:13.513420 1188 log.go:181] (0xc000f0c1e0) (3) Data frame sent\nI0904 13:25:13.514190 1188 log.go:181] (0xc000c97550) Data frame received for 3\nI0904 13:25:13.514224 1188 log.go:181] (0xc000c97550) Data frame received for 5\nI0904 13:25:13.514241 1188 log.go:181] (0xc0009226e0) (5) Data frame handling\nI0904 13:25:13.514256 1188 log.go:181] (0xc000f0c1e0) (3) Data frame handling\nI0904 13:25:13.515481 1188 log.go:181] (0xc000c97550) Data frame received for 1\nI0904 13:25:13.515509 1188 log.go:181] (0xc000828500) (1) Data frame handling\nI0904 13:25:13.515523 1188 log.go:181] (0xc000828500) (1) Data frame sent\nI0904 13:25:13.515534 1188 log.go:181] (0xc000c97550) (0xc000828500) Stream removed, broadcasting: 1\nI0904 13:25:13.515544 1188 log.go:181] (0xc000c97550) Go away received\nI0904 13:25:13.515927 1188 log.go:181] (0xc000c97550) (0xc000828500) Stream removed, broadcasting: 1\nI0904 13:25:13.515944 1188 log.go:181] (0xc000c97550) (0xc000f0c1e0) Stream removed, broadcasting: 3\nI0904 13:25:13.515953 1188 log.go:181] (0xc000c97550) (0xc0009226e0) Stream removed, broadcasting: 5\n" Sep 4 13:25:13.521: INFO: stdout: "affinity-nodeport-timeout-h5qqh" Sep 4 13:25:13.521: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-3951, will wait for the garbage collector to delete the pods Sep 4 13:25:14.326: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 519.065253ms Sep 4 
13:25:16.226: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 1.900195922s [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:25:40.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3951" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:67.953 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":57,"skipped":934,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:25:40.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should find a service from listing all namespaces [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:25:40.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6840" for this suite. 
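Annotation: the session-affinity run above first probes the proxy mode ("iptables"), then drives 16 back-to-back curls through one NodePort and sees a single backend (affinity-nodeport-timeout-vj8hc), then waits roughly 15 seconds (13:24:58 to 13:25:13) and sees a different backend (affinity-nodeport-timeout-h5qqh), i.e. the affinity expired. A sketch of the Service shape under test, with ClientIP affinity and a bounded timeout; the selector, ports, and the 10-second value are illustrative assumptions, not the suite's exact fixtures:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport-timeout"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"name": "affinity-nodeport-timeout"}, // assumed backend label
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376), // assumed backend port
			}},
			// ClientIP affinity pins a client to one backend; the timeout bounds
			// how long that pin survives without traffic, which is why repeated
			// curls hit one pod and a curl after the idle window hits another.
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: int32Ptr(10)}, // illustrative value
			},
		},
	}
	fmt.Println(svc.Name)
}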
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":303,"completed":58,"skipped":943,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:25:40.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-9bcea054-7962-4fa8-acee-71e09139aee3 STEP: Creating a pod to test consume secrets Sep 4 13:25:40.371: INFO: Waiting up to 5m0s for pod "pod-secrets-79e8aaf0-740b-4dd8-a7f9-01463746bcf5" in namespace "secrets-3904" to be "Succeeded or Failed" Sep 4 13:25:40.379: INFO: Pod "pod-secrets-79e8aaf0-740b-4dd8-a7f9-01463746bcf5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.619288ms Sep 4 13:25:42.383: INFO: Pod "pod-secrets-79e8aaf0-740b-4dd8-a7f9-01463746bcf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011757819s Sep 4 13:25:44.387: INFO: Pod "pod-secrets-79e8aaf0-740b-4dd8-a7f9-01463746bcf5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015362576s Sep 4 13:25:46.391: INFO: Pod "pod-secrets-79e8aaf0-740b-4dd8-a7f9-01463746bcf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019590025s STEP: Saw pod success Sep 4 13:25:46.391: INFO: Pod "pod-secrets-79e8aaf0-740b-4dd8-a7f9-01463746bcf5" satisfied condition "Succeeded or Failed" Sep 4 13:25:46.397: INFO: Trying to get logs from node latest-worker pod pod-secrets-79e8aaf0-740b-4dd8-a7f9-01463746bcf5 container secret-volume-test: STEP: delete the pod Sep 4 13:25:46.478: INFO: Waiting for pod pod-secrets-79e8aaf0-740b-4dd8-a7f9-01463746bcf5 to disappear Sep 4 13:25:46.480: INFO: Pod pod-secrets-79e8aaf0-740b-4dd8-a7f9-01463746bcf5 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:25:46.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3904" for this suite. 
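Annotation: the secret-volume test above reduces to three steps: create a Secret, mount it as a volume, and have a one-shot container read the key back (the kubelet materializes each Data key as a file). A minimal sketch with illustrative names and a busybox reader; the suite's own image and generated names differ:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-demo"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: secret.Name},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "busybox",
				// Read the key back; success means the mount is populated.
				Command:      []string{"sh", "-c", "cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
	fmt.Println(secret.Name, pod.Name)
}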
• [SLOW TEST:6.238 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":59,"skipped":995,"failed":0} [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:25:46.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 4 13:25:46.551: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bf79055c-0dee-4b1b-bcb1-68169cb6c25c" in namespace "downward-api-9186" to be "Succeeded or Failed" Sep 4 13:25:46.555: INFO: Pod "downwardapi-volume-bf79055c-0dee-4b1b-bcb1-68169cb6c25c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.299157ms Sep 4 13:25:48.559: INFO: Pod "downwardapi-volume-bf79055c-0dee-4b1b-bcb1-68169cb6c25c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007698183s Sep 4 13:25:50.562: INFO: Pod "downwardapi-volume-bf79055c-0dee-4b1b-bcb1-68169cb6c25c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011071536s STEP: Saw pod success Sep 4 13:25:50.562: INFO: Pod "downwardapi-volume-bf79055c-0dee-4b1b-bcb1-68169cb6c25c" satisfied condition "Succeeded or Failed" Sep 4 13:25:50.564: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-bf79055c-0dee-4b1b-bcb1-68169cb6c25c container client-container: STEP: delete the pod Sep 4 13:25:50.593: INFO: Waiting for pod downwardapi-volume-bf79055c-0dee-4b1b-bcb1-68169cb6c25c to disappear Sep 4 13:25:50.621: INFO: Pod downwardapi-volume-bf79055c-0dee-4b1b-bcb1-68169cb6c25c no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:25:50.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9186" for this suite. 
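Annotation: the test above exercises the downward API fallback rule: when a container sets no memory limit, a projected limits.memory resolves to the node's allocatable memory rather than failing. A sketch of that shape, with illustrative names; note the container deliberately has no Resources set:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-default-limit-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				// No resources.limits set: the projected limits.memory falls back
				// to the node's allocatable memory, which is what the test asserts.
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	fmt.Println(pod.Name)
}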
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":60,"skipped":995,"failed":0} SSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:25:50.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Sep 4 13:25:50.750: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9209 /api/v1/namespaces/watch-9209/configmaps/e2e-watch-test-label-changed 4f625125-1a3f-4c4b-80b3-22c30a4311f9 6804299 0 2020-09-04 13:25:50 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-04 13:25:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Sep 4 13:25:50.751: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9209 /api/v1/namespaces/watch-9209/configmaps/e2e-watch-test-label-changed 4f625125-1a3f-4c4b-80b3-22c30a4311f9 6804300 0 2020-09-04 13:25:50 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-04 13:25:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 4 13:25:50.751: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9209 /api/v1/namespaces/watch-9209/configmaps/e2e-watch-test-label-changed 4f625125-1a3f-4c4b-80b3-22c30a4311f9 6804301 0 2020-09-04 13:25:50 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-04 13:25:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Sep 4 13:26:00.847: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9209 /api/v1/namespaces/watch-9209/configmaps/e2e-watch-test-label-changed 4f625125-1a3f-4c4b-80b3-22c30a4311f9 6804350 0 2020-09-04 13:25:50 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-04 13:26:00 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 4 13:26:00.847: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9209 /api/v1/namespaces/watch-9209/configmaps/e2e-watch-test-label-changed 4f625125-1a3f-4c4b-80b3-22c30a4311f9 6804351 0 2020-09-04 13:25:50 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-04 13:26:00 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 4 13:26:00.847: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9209 /api/v1/namespaces/watch-9209/configmaps/e2e-watch-test-label-changed 4f625125-1a3f-4c4b-80b3-22c30a4311f9 6804353 0 2020-09-04 13:25:50 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-04 13:26:00 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:26:00.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9209" for this suite. 
• [SLOW TEST:10.230 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":303,"completed":61,"skipped":999,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:26:00.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 4 13:26:01.497: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 4 13:26:03.506: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734822761, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734822761, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734822761, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734822761, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 4 13:26:06.547: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 13:26:06.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7101-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a 
custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:26:07.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8899" for this suite. STEP: Destroying namespace "webhook-8899-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.194 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":303,"completed":62,"skipped":999,"failed":0} SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:26:08.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 4 13:26:08.242: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 4 13:26:08.272: INFO: Waiting for terminating namespaces to be deleted... 
Sep 4 13:26:08.274: INFO: Logging pods the apiserver thinks is on node latest-worker before test Sep 4 13:26:08.281: INFO: daemon-set-64t9w from daemonsets-1323 started at 2020-08-21 01:17:50 +0000 UTC (1 container statuses recorded) Sep 4 13:26:08.281: INFO: Container app ready: true, restart count 0 Sep 4 13:26:08.281: INFO: daemon-set-ff4l6 from daemonsets-8598 started at 2020-08-26 01:17:55 +0000 UTC (1 container statuses recorded) Sep 4 13:26:08.282: INFO: Container app ready: true, restart count 0 Sep 4 13:26:08.282: INFO: live6 from default started at 2020-08-30 11:51:51 +0000 UTC (1 container statuses recorded) Sep 4 13:26:08.282: INFO: Container live6 ready: false, restart count 0 Sep 4 13:26:08.282: INFO: test-recreate-deployment-f79dd4667-n4rtn from deployment-6445 started at 2020-08-28 02:33:33 +0000 UTC (1 container statuses recorded) Sep 4 13:26:08.282: INFO: Container httpd ready: true, restart count 0 Sep 4 13:26:08.282: INFO: bono-7b5b98574f-j2wlq from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (2 container statuses recorded) Sep 4 13:26:08.282: INFO: Container bono ready: true, restart count 0 Sep 4 13:26:08.282: INFO: Container tailer ready: true, restart count 0 Sep 4 13:26:08.282: INFO: chronos-678bcff97d-665n9 from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (2 container statuses recorded) Sep 4 13:26:08.282: INFO: Container chronos ready: true, restart count 0 Sep 4 13:26:08.282: INFO: Container tailer ready: true, restart count 0 Sep 4 13:26:08.282: INFO: homer-6d85c54796-5grhn from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (1 container statuses recorded) Sep 4 13:26:08.282: INFO: Container homer ready: true, restart count 0 Sep 4 13:26:08.282: INFO: homestead-prov-54ddb995c5-phmgj from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (1 container statuses recorded) Sep 4 13:26:08.282: INFO: Container homestead-prov ready: true, restart count 0 Sep 4 13:26:08.282: INFO: live-test from ims-fqddr started at 2020-08-30 10:33:20 +0000 UTC (1 container statuses recorded) Sep 4 13:26:08.282: INFO: Container live-test ready: false, restart count 0 Sep 4 13:26:08.282: INFO: ralf-645db98795-l7gpf from ims-fqddr started at 2020-08-30 10:27:31 +0000 UTC (2 container statuses recorded) Sep 4 13:26:08.282: INFO: Container ralf ready: true, restart count 0 Sep 4 13:26:08.282: INFO: Container tailer ready: true, restart count 0 Sep 4 13:26:08.282: INFO: kindnet-gmpqb from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Sep 4 13:26:08.282: INFO: Container kindnet-cni ready: true, restart count 1 Sep 4 13:26:08.282: INFO: kube-proxy-82wrf from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Sep 4 13:26:08.282: INFO: Container kube-proxy ready: true, restart count 0 Sep 4 13:26:08.282: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Sep 4 13:26:08.339: INFO: daemon-set-jxhg7 from daemonsets-1323 started at 2020-08-21 01:17:50 +0000 UTC (1 container statuses recorded) Sep 4 13:26:08.339: INFO: Container app ready: true, restart count 0 Sep 4 13:26:08.339: INFO: daemon-set-6qbhl from daemonsets-8598 started at 2020-08-26 01:17:55 +0000 UTC (1 container statuses recorded) Sep 4 13:26:08.339: INFO: Container app ready: true, restart count 0 Sep 4 13:26:08.339: INFO: live3 from default started at 2020-08-30 11:14:22 +0000 UTC (1 container statuses recorded) Sep 4 13:26:08.339: INFO: Container live3 ready: false, restart count 0 Sep 4 13:26:08.339: INFO: live4 
from default started at 2020-08-30 11:19:29 +0000 UTC (1 container statuses recorded) Sep 4 13:26:08.339: INFO: Container live4 ready: false, restart count 0 Sep 4 13:26:08.339: INFO: live5 from default started at 2020-08-30 11:22:52 +0000 UTC (1 container statuses recorded) Sep 4 13:26:08.339: INFO: Container live5 ready: false, restart count 0 Sep 4 13:26:08.339: INFO: astaire-66c5667484-7s6hd from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (2 container statuses recorded) Sep 4 13:26:08.339: INFO: Container astaire ready: true, restart count 0 Sep 4 13:26:08.339: INFO: Container tailer ready: true, restart count 0 Sep 4 13:26:08.339: INFO: cassandra-bf5b4886d-w9qkb from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (1 container statuses recorded) Sep 4 13:26:08.339: INFO: Container cassandra ready: true, restart count 0 Sep 4 13:26:08.339: INFO: ellis-668f49999b-84cll from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (1 container statuses recorded) Sep 4 13:26:08.339: INFO: Container ellis ready: true, restart count 0 Sep 4 13:26:08.339: INFO: etcd-744b4d9f98-5bm8d from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (1 container statuses recorded) Sep 4 13:26:08.339: INFO: Container etcd ready: true, restart count 0 Sep 4 13:26:08.339: INFO: homestead-59959889bd-dh787 from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (2 container statuses recorded) Sep 4 13:26:08.339: INFO: Container homestead ready: true, restart count 0 Sep 4 13:26:08.339: INFO: Container tailer ready: true, restart count 0 Sep 4 13:26:08.339: INFO: sprout-b4bbc5c49-m9nqx from ims-fqddr started at 2020-08-30 10:27:31 +0000 UTC (2 container statuses recorded) Sep 4 13:26:08.339: INFO: Container sprout ready: true, restart count 0 Sep 4 13:26:08.339: INFO: Container tailer ready: true, restart count 0 Sep 4 13:26:08.339: INFO: kindnet-grzzh from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Sep 4 13:26:08.339: INFO: Container kindnet-cni ready: true, restart count 1 Sep 4 13:26:08.339: INFO: kube-proxy-fjk8r from kube-system started at 2020-08-15 09:42:29 +0000 UTC (1 container statuses recorded) Sep 4 13:26:08.339: INFO: Container kube-proxy ready: true, restart count 0 Sep 4 13:26:08.339: INFO: sample-webhook-deployment-cbccbf6bb-s56td from webhook-8899 started at 2020-09-04 13:26:01 +0000 UTC (1 container statuses recorded) Sep 4 13:26:08.339: INFO: Container sample-webhook ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
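------------------------------
[Editor's note — illustrative sketch, not part of the suite's output] The steps that follow create three pods sharing hostPort 54321 and expect all of them to schedule, because the scheduler keys host-port conflicts on the (hostIP, hostPort, protocol) triple rather than on the port alone: pods that differ in hostIP or protocol can coexist on one node. A hedged sketch of the pod-spec shape involved — the pod name, namespace, and image are assumptions, not the test's actual values:

    package main

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(config)

    	// A second pod with the same HostPort but HostIP 127.0.0.2, or the
    	// same HostIP/HostPort but Protocol UDP, does not conflict with this
    	// one, so all three land on the same node in the test above.
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "hostport-demo"},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:  "demo",
    				Image: "busybox", // illustrative image
    				Ports: []corev1.ContainerPort{{
    					ContainerPort: 54321,
    					HostPort:      54321,
    					HostIP:        "127.0.0.1",
    					Protocol:      corev1.ProtocolTCP,
    				}},
    			}},
    		},
    	}
    	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    }

------------------------------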
STEP: verifying the node has the label kubernetes.io/e2e-4b6ff45c-e892-4be1-a75a-53430fef641b 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-4b6ff45c-e892-4be1-a75a-53430fef641b off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-4b6ff45c-e892-4be1-a75a-53430fef641b [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:26:24.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2269" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.633 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":303,"completed":63,"skipped":1010,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:26:24.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Sep 4 13:26:25.663: INFO: starting watch STEP: patching STEP: updating Sep 4 13:26:25.723: INFO: waiting for watch events with expected annotations Sep 4 13:26:25.723: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] 
[sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:26:25.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-4618" for this suite. •{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":303,"completed":64,"skipped":1067,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:26:25.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 4 13:26:26.006: INFO: Waiting up to 5m0s for pod "downwardapi-volume-442b8227-a406-4870-b8e0-8fa92212b005" in namespace "projected-1859" to be "Succeeded or Failed" Sep 4 13:26:26.015: INFO: Pod "downwardapi-volume-442b8227-a406-4870-b8e0-8fa92212b005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.163959ms Sep 4 13:26:28.098: INFO: Pod "downwardapi-volume-442b8227-a406-4870-b8e0-8fa92212b005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091650497s Sep 4 13:26:30.121: INFO: Pod "downwardapi-volume-442b8227-a406-4870-b8e0-8fa92212b005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.115023806s STEP: Saw pod success Sep 4 13:26:30.121: INFO: Pod "downwardapi-volume-442b8227-a406-4870-b8e0-8fa92212b005" satisfied condition "Succeeded or Failed" Sep 4 13:26:30.245: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-442b8227-a406-4870-b8e0-8fa92212b005 container client-container: STEP: delete the pod Sep 4 13:26:30.465: INFO: Waiting for pod downwardapi-volume-442b8227-a406-4870-b8e0-8fa92212b005 to disappear Sep 4 13:26:30.492: INFO: Pod downwardapi-volume-442b8227-a406-4870-b8e0-8fa92212b005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:26:30.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1859" for this suite. 
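------------------------------
[Editor's note — illustrative sketch, not part of the suite's output] The Projected downwardAPI test above mounts the pod's own name into a file via a projected volume, has the container cat that file, and then checks the container log. A minimal sketch of the pod it builds, assuming `default` namespace, a busybox image, and illustrative pod/volume names:

    package main

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(config)

    	// metadata.name is projected into /etc/podinfo/podname; the container
    	// prints it and exits, so the pod runs to "Succeeded".
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Containers: []corev1.Container{{
    				Name:    "client-container",
    				Image:   "busybox",
    				Command: []string{"cat", "/etc/podinfo/podname"},
    				VolumeMounts: []corev1.VolumeMount{{
    					Name: "podinfo", MountPath: "/etc/podinfo",
    				}},
    			}},
    			Volumes: []corev1.Volume{{
    				Name: "podinfo",
    				VolumeSource: corev1.VolumeSource{
    					Projected: &corev1.ProjectedVolumeSource{
    						Sources: []corev1.VolumeProjection{{
    							DownwardAPI: &corev1.DownwardAPIProjection{
    								Items: []corev1.DownwardAPIVolumeFile{{
    									Path: "podname",
    									FieldRef: &corev1.ObjectFieldSelector{
    										FieldPath: "metadata.name",
    									},
    								}},
    							},
    						}},
    					},
    				},
    			}},
    		},
    	}
    	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    }

------------------------------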
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":303,"completed":65,"skipped":1099,"failed":0} ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:26:30.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 13:28:30.632: INFO: Deleting pod "var-expansion-3d58c86f-a094-4484-9e69-2522fb6fdfda" in namespace "var-expansion-7962" Sep 4 13:28:30.636: INFO: Wait up to 5m0s for pod "var-expansion-3d58c86f-a094-4484-9e69-2522fb6fdfda" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:28:34.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7962" for this suite. • [SLOW TEST:124.144 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":303,"completed":66,"skipped":1099,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:28:34.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod Sep 4 13:30:35.258: INFO: Successfully updated pod "var-expansion-bd40fda5-bf2c-4110-949f-b0c8b22daa8a" STEP: waiting for pod running STEP: deleting the pod gracefully Sep 4 13:30:39.283: INFO: Deleting pod "var-expansion-bd40fda5-bf2c-4110-949f-b0c8b22daa8a" in namespace "var-expansion-2993" Sep 4 13:30:39.287: INFO: Wait up to 5m0s for pod "var-expansion-bd40fda5-bf2c-4110-949f-b0c8b22daa8a" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:31:13.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2993" for this suite. • [SLOW TEST:158.680 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":303,"completed":67,"skipped":1158,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:31:13.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Sep 4 13:31:17.980: INFO: Successfully updated pod "annotationupdate4fa8218d-00da-47e8-89a7-abb750a1fb5a" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:31:20.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1932" for this suite. 
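------------------------------
[Editor's note — illustrative sketch, not part of the suite's output] The Downward API annotations test above relies on the kubelet rewriting a mounted downwardAPI file for metadata.annotations after the pod object is updated, with no container restart. A hedged sketch of the update half of that flow — the pod name, namespace, and annotation key/value here are assumptions:

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/types"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(config)

    	// Strategic-merge-patch a new annotation value onto a running pod.
    	// If the pod mounts a downwardAPI volume file for
    	// metadata.annotations, the kubelet refreshes that file on its next
    	// sync; the test then reads the updated value from the container.
    	patch := []byte(`{"metadata":{"annotations":{"builder":"updated-value"}}}`)
    	_, err = client.CoreV1().Pods("default").Patch(context.TODO(),
    		"annotationupdate-demo", types.StrategicMergePatchType, patch,
    		metav1.PatchOptions{})
    	if err != nil {
    		panic(err)
    	}
    }

------------------------------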
• [SLOW TEST:6.719 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":68,"skipped":1171,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:31:20.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-820.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-820.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 4 13:31:28.199: INFO: DNS probes using dns-820/dns-test-d9042fd8-3952-4bba-8713-12d63384c7ca succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:31:28.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-820" for this suite. • [SLOW TEST:8.270 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":303,"completed":69,"skipped":1185,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:31:28.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:31:45.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7485" for this suite. • [SLOW TEST:17.625 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. 
[Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":303,"completed":70,"skipped":1215,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:31:45.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Sep 4 13:31:46.031: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:31:54.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1364" for this suite. 
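------------------------------
[Editor's note — illustrative sketch, not part of the suite's output] The InitContainer test above ("PodSpec: initContainers in spec.initContainers") exercises the rule that init containers run one at a time, in order, each to completion, before any app container starts; with RestartPolicy=Never a failing init container fails the whole pod rather than being retried. A minimal sketch of that pod shape, with assumed names and a busybox image:

    package main

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(config)

    	// init1 must exit 0 before init2 starts; run1 starts only after both
    	// init containers have succeeded.
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			InitContainers: []corev1.Container{
    				{Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
    				{Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
    			},
    			Containers: []corev1.Container{
    				{Name: "run1", Image: "busybox", Command: []string{"/bin/true"}},
    			},
    		},
    	}
    	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    }

------------------------------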
• [SLOW TEST:8.080 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":303,"completed":71,"skipped":1226,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:31:54.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container Sep 4 13:31:59.074: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3905 pod-service-account-4b3ef844-e9d9-486c-a38f-1d2d1a481909 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Sep 4 13:31:59.317: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3905 pod-service-account-4b3ef844-e9d9-486c-a38f-1d2d1a481909 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Sep 4 13:31:59.574: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3905 pod-service-account-4b3ef844-e9d9-486c-a38f-1d2d1a481909 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:31:59.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3905" for this suite. 
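------------------------------
[Editor's note — illustrative sketch, not part of the suite's output] The ServiceAccounts test above execs `cat` on the three files the kubelet projects into every pod that mounts a service account: the bearer token, the cluster CA bundle, and the pod's namespace. The mount path is the well-known one shown in the kubectl commands in the log; this small in-pod sketch mirrors the same reads:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // Runs inside a pod; the kubelet mounts the service account credential
    // at this well-known path.
    const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

    func main() {
    	for _, name := range []string{"token", "ca.crt", "namespace"} {
    		data, err := os.ReadFile(filepath.Join(saDir, name))
    		if err != nil {
    			panic(err)
    		}
    		fmt.Printf("%s: %d bytes\n", name, len(data))
    	}
    }

------------------------------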
• [SLOW TEST:5.773 seconds] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":303,"completed":72,"skipped":1242,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:31:59.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:31:59.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-62" for this suite. 
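------------------------------
[Editor's note — illustrative sketch, not part of the suite's output] The Table transformation test above asserts that a backend which cannot render its objects as a meta.k8s.io Table answers 406 Not Acceptable. The client opts in by sending an Accept header requesting the Table representation; built-in resources honor it, as in this hedged sketch against the pods endpoint (namespace is an assumption):

    package main

    import (
    	"context"
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(config)

    	// Ask the server to render the list as a meta.k8s.io/v1 Table. A
    	// backend that does not implement the transformation returns 406,
    	// which surfaces here as a StatusError.
    	data, err := client.CoreV1().RESTClient().Get().
    		Namespace("default").
    		Resource("pods").
    		SetHeader("Accept", "application/json;as=Table;v=v1;g=meta.k8s.io").
    		DoRaw(context.TODO())
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(data))
    }

------------------------------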
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":303,"completed":73,"skipped":1255,"failed":0} SSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:31:59.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:32:00.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8980" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":74,"skipped":1259,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:32:00.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-4d480f5a-6ff6-4f0b-adca-3dfda48e902b STEP: Creating a pod to test consume secrets Sep 4 13:32:00.348: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6390c7e5-62ed-43cc-b317-b33ff8044252" in namespace "projected-619" to be "Succeeded or Failed" Sep 4 13:32:00.352: INFO: Pod "pod-projected-secrets-6390c7e5-62ed-43cc-b317-b33ff8044252": Phase="Pending", Reason="", readiness=false. Elapsed: 3.573352ms Sep 4 13:32:02.418: INFO: Pod "pod-projected-secrets-6390c7e5-62ed-43cc-b317-b33ff8044252": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.069917665s Sep 4 13:32:04.548: INFO: Pod "pod-projected-secrets-6390c7e5-62ed-43cc-b317-b33ff8044252": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200309691s Sep 4 13:32:06.552: INFO: Pod "pod-projected-secrets-6390c7e5-62ed-43cc-b317-b33ff8044252": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.204108113s STEP: Saw pod success Sep 4 13:32:06.552: INFO: Pod "pod-projected-secrets-6390c7e5-62ed-43cc-b317-b33ff8044252" satisfied condition "Succeeded or Failed" Sep 4 13:32:06.556: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-6390c7e5-62ed-43cc-b317-b33ff8044252 container projected-secret-volume-test: STEP: delete the pod Sep 4 13:32:06.612: INFO: Waiting for pod pod-projected-secrets-6390c7e5-62ed-43cc-b317-b33ff8044252 to disappear Sep 4 13:32:06.632: INFO: Pod pod-projected-secrets-6390c7e5-62ed-43cc-b317-b33ff8044252 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:32:06.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-619" for this suite. • [SLOW TEST:6.433 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":75,"skipped":1279,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:32:06.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 13:32:10.834: INFO: Waiting up to 5m0s for pod "client-envvars-dc4a63b9-de56-43fc-906a-310bc8287dc8" in namespace "pods-5621" to be "Succeeded or Failed" Sep 4 13:32:10.873: INFO: Pod "client-envvars-dc4a63b9-de56-43fc-906a-310bc8287dc8": Phase="Pending", Reason="", readiness=false. Elapsed: 38.446515ms Sep 4 13:32:12.957: INFO: Pod "client-envvars-dc4a63b9-de56-43fc-906a-310bc8287dc8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.122736709s Sep 4 13:32:14.961: INFO: Pod "client-envvars-dc4a63b9-de56-43fc-906a-310bc8287dc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.126784976s STEP: Saw pod success Sep 4 13:32:14.961: INFO: Pod "client-envvars-dc4a63b9-de56-43fc-906a-310bc8287dc8" satisfied condition "Succeeded or Failed" Sep 4 13:32:14.963: INFO: Trying to get logs from node latest-worker pod client-envvars-dc4a63b9-de56-43fc-906a-310bc8287dc8 container env3cont: STEP: delete the pod Sep 4 13:32:15.148: INFO: Waiting for pod client-envvars-dc4a63b9-de56-43fc-906a-310bc8287dc8 to disappear Sep 4 13:32:15.150: INFO: Pod client-envvars-dc4a63b9-de56-43fc-906a-310bc8287dc8 no longer exists [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:32:15.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5621" for this suite. • [SLOW TEST:8.526 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":303,"completed":76,"skipped":1290,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:32:15.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Sep 4 13:32:15.289: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4147 /api/v1/namespaces/watch-4147/configmaps/e2e-watch-test-watch-closed ff77b8f7-7ae9-422c-89b2-776661455779 6805952 0 2020-09-04 13:32:15 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-09-04 13:32:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Sep 4 13:32:15.289: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4147 /api/v1/namespaces/watch-4147/configmaps/e2e-watch-test-watch-closed ff77b8f7-7ae9-422c-89b2-776661455779 
6805953 0 2020-09-04 13:32:15 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-09-04 13:32:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Sep 4 13:32:15.324: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4147 /api/v1/namespaces/watch-4147/configmaps/e2e-watch-test-watch-closed ff77b8f7-7ae9-422c-89b2-776661455779 6805954 0 2020-09-04 13:32:15 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-09-04 13:32:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 4 13:32:15.324: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4147 /api/v1/namespaces/watch-4147/configmaps/e2e-watch-test-watch-closed ff77b8f7-7ae9-422c-89b2-776661455779 6805955 0 2020-09-04 13:32:15 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-09-04 13:32:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:32:15.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4147" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":303,"completed":77,"skipped":1318,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:32:15.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:32:31.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4469" for this suite. • [SLOW TEST:16.246 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":303,"completed":78,"skipped":1326,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:32:31.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command Sep 4 13:32:31.716: INFO: Waiting up to 5m0s for pod "var-expansion-bdb9dda3-b537-45ee-87ed-66aa5dd59bf2" in namespace "var-expansion-4477" to be "Succeeded or Failed" Sep 4 13:32:31.722: INFO: Pod "var-expansion-bdb9dda3-b537-45ee-87ed-66aa5dd59bf2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.607656ms Sep 4 13:32:34.035: INFO: Pod "var-expansion-bdb9dda3-b537-45ee-87ed-66aa5dd59bf2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.319025518s Sep 4 13:32:36.041: INFO: Pod "var-expansion-bdb9dda3-b537-45ee-87ed-66aa5dd59bf2": Phase="Running", Reason="", readiness=true. Elapsed: 4.32508504s Sep 4 13:32:38.045: INFO: Pod "var-expansion-bdb9dda3-b537-45ee-87ed-66aa5dd59bf2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.32933941s STEP: Saw pod success Sep 4 13:32:38.045: INFO: Pod "var-expansion-bdb9dda3-b537-45ee-87ed-66aa5dd59bf2" satisfied condition "Succeeded or Failed" Sep 4 13:32:38.048: INFO: Trying to get logs from node latest-worker pod var-expansion-bdb9dda3-b537-45ee-87ed-66aa5dd59bf2 container dapi-container: STEP: delete the pod Sep 4 13:32:38.084: INFO: Waiting for pod var-expansion-bdb9dda3-b537-45ee-87ed-66aa5dd59bf2 to disappear Sep 4 13:32:38.101: INFO: Pod var-expansion-bdb9dda3-b537-45ee-87ed-66aa5dd59bf2 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:32:38.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4477" for this suite. • [SLOW TEST:6.531 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":303,"completed":79,"skipped":1339,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:32:38.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Sep 4 13:32:42.720: INFO: Successfully updated pod "pod-update-activedeadlineseconds-ad0b63d7-76fb-41e0-a91f-10f238bba6cc" Sep 4 13:32:42.720: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-ad0b63d7-76fb-41e0-a91f-10f238bba6cc" in namespace "pods-5297" to be "terminated due to deadline exceeded" Sep 4 13:32:42.765: INFO: Pod "pod-update-activedeadlineseconds-ad0b63d7-76fb-41e0-a91f-10f238bba6cc": Phase="Running", Reason="", readiness=true. Elapsed: 45.05493ms Sep 4 13:32:44.907: INFO: Pod "pod-update-activedeadlineseconds-ad0b63d7-76fb-41e0-a91f-10f238bba6cc": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 2.187284915s Sep 4 13:32:44.907: INFO: Pod "pod-update-activedeadlineseconds-ad0b63d7-76fb-41e0-a91f-10f238bba6cc" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:32:44.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5297" for this suite. • [SLOW TEST:6.809 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":303,"completed":80,"skipped":1364,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:32:44.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-7543 STEP: creating replication controller nodeport-test in namespace services-7543 I0904 13:32:45.216065 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-7543, replica count: 2 I0904 13:32:48.266484 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0904 13:32:51.266640 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 4 13:32:51.266: INFO: Creating new exec pod Sep 4 13:32:56.323: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-7543 execpodxxxzh -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Sep 4 13:33:00.011: INFO: stderr: "I0904 13:32:59.900344 1256 log.go:181] (0xc0005d6000) (0xc000c52000) Create stream\nI0904 13:32:59.900390 1256 log.go:181] (0xc0005d6000) (0xc000c52000) Stream added, broadcasting: 1\nI0904 13:32:59.908712 1256 log.go:181] (0xc0005d6000) Reply frame received for 1\nI0904 13:32:59.908927 1256 log.go:181] (0xc0005d6000) (0xc000954000) Create 
stream\nI0904 13:32:59.908956 1256 log.go:181] (0xc0005d6000) (0xc000954000) Stream added, broadcasting: 3\nI0904 13:32:59.910301 1256 log.go:181] (0xc0005d6000) Reply frame received for 3\nI0904 13:32:59.910347 1256 log.go:181] (0xc0005d6000) (0xc0009540a0) Create stream\nI0904 13:32:59.910363 1256 log.go:181] (0xc0005d6000) (0xc0009540a0) Stream added, broadcasting: 5\nI0904 13:32:59.912525 1256 log.go:181] (0xc0005d6000) Reply frame received for 5\nI0904 13:33:00.002752 1256 log.go:181] (0xc0005d6000) Data frame received for 5\nI0904 13:33:00.002794 1256 log.go:181] (0xc0009540a0) (5) Data frame handling\nI0904 13:33:00.002806 1256 log.go:181] (0xc0009540a0) (5) Data frame sent\nI0904 13:33:00.002819 1256 log.go:181] (0xc0005d6000) Data frame received for 5\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0904 13:33:00.002833 1256 log.go:181] (0xc0005d6000) Data frame received for 3\nI0904 13:33:00.002849 1256 log.go:181] (0xc000954000) (3) Data frame handling\nI0904 13:33:00.002877 1256 log.go:181] (0xc0009540a0) (5) Data frame handling\nI0904 13:33:00.004448 1256 log.go:181] (0xc0005d6000) Data frame received for 1\nI0904 13:33:00.004473 1256 log.go:181] (0xc000c52000) (1) Data frame handling\nI0904 13:33:00.004497 1256 log.go:181] (0xc000c52000) (1) Data frame sent\nI0904 13:33:00.004529 1256 log.go:181] (0xc0005d6000) (0xc000c52000) Stream removed, broadcasting: 1\nI0904 13:33:00.004547 1256 log.go:181] (0xc0005d6000) Go away received\nI0904 13:33:00.005019 1256 log.go:181] (0xc0005d6000) (0xc000c52000) Stream removed, broadcasting: 1\nI0904 13:33:00.005042 1256 log.go:181] (0xc0005d6000) (0xc000954000) Stream removed, broadcasting: 3\nI0904 13:33:00.005056 1256 log.go:181] (0xc0005d6000) (0xc0009540a0) Stream removed, broadcasting: 5\n" Sep 4 13:33:00.011: INFO: stdout: "" Sep 4 13:33:00.012: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-7543 execpodxxxzh -- /bin/sh -x -c nc -zv -t -w 2 10.97.174.205 80' Sep 4 13:33:00.212: INFO: stderr: "I0904 13:33:00.142022 1274 log.go:181] (0xc00065ed10) (0xc000f185a0) Create stream\nI0904 13:33:00.142097 1274 log.go:181] (0xc00065ed10) (0xc000f185a0) Stream added, broadcasting: 1\nI0904 13:33:00.147637 1274 log.go:181] (0xc00065ed10) Reply frame received for 1\nI0904 13:33:00.147693 1274 log.go:181] (0xc00065ed10) (0xc000d1a0a0) Create stream\nI0904 13:33:00.147714 1274 log.go:181] (0xc00065ed10) (0xc000d1a0a0) Stream added, broadcasting: 3\nI0904 13:33:00.148587 1274 log.go:181] (0xc00065ed10) Reply frame received for 3\nI0904 13:33:00.148629 1274 log.go:181] (0xc00065ed10) (0xc000f18000) Create stream\nI0904 13:33:00.148638 1274 log.go:181] (0xc00065ed10) (0xc000f18000) Stream added, broadcasting: 5\nI0904 13:33:00.149597 1274 log.go:181] (0xc00065ed10) Reply frame received for 5\nI0904 13:33:00.204586 1274 log.go:181] (0xc00065ed10) Data frame received for 3\nI0904 13:33:00.204623 1274 log.go:181] (0xc000d1a0a0) (3) Data frame handling\nI0904 13:33:00.204658 1274 log.go:181] (0xc00065ed10) Data frame received for 5\nI0904 13:33:00.204666 1274 log.go:181] (0xc000f18000) (5) Data frame handling\nI0904 13:33:00.204673 1274 log.go:181] (0xc000f18000) (5) Data frame sent\nI0904 13:33:00.204680 1274 log.go:181] (0xc00065ed10) Data frame received for 5\nI0904 13:33:00.204685 1274 log.go:181] (0xc000f18000) (5) Data frame handling\n+ nc -zv -t -w 2 10.97.174.205 80\nConnection to 10.97.174.205 80 port [tcp/http] 
succeeded!\nI0904 13:33:00.206208 1274 log.go:181] (0xc00065ed10) Data frame received for 1\nI0904 13:33:00.206237 1274 log.go:181] (0xc000f185a0) (1) Data frame handling\nI0904 13:33:00.206251 1274 log.go:181] (0xc000f185a0) (1) Data frame sent\nI0904 13:33:00.206263 1274 log.go:181] (0xc00065ed10) (0xc000f185a0) Stream removed, broadcasting: 1\nI0904 13:33:00.206295 1274 log.go:181] (0xc00065ed10) Go away received\nI0904 13:33:00.206537 1274 log.go:181] (0xc00065ed10) (0xc000f185a0) Stream removed, broadcasting: 1\nI0904 13:33:00.206556 1274 log.go:181] (0xc00065ed10) (0xc000d1a0a0) Stream removed, broadcasting: 3\nI0904 13:33:00.206568 1274 log.go:181] (0xc00065ed10) (0xc000f18000) Stream removed, broadcasting: 5\n" Sep 4 13:33:00.213: INFO: stdout: "" Sep 4 13:33:00.213: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-7543 execpodxxxzh -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 32452' Sep 4 13:33:00.436: INFO: stderr: "I0904 13:33:00.342030 1292 log.go:181] (0xc00027efd0) (0xc000d10500) Create stream\nI0904 13:33:00.342081 1292 log.go:181] (0xc00027efd0) (0xc000d10500) Stream added, broadcasting: 1\nI0904 13:33:00.344620 1292 log.go:181] (0xc00027efd0) Reply frame received for 1\nI0904 13:33:00.344672 1292 log.go:181] (0xc00027efd0) (0xc000c78000) Create stream\nI0904 13:33:00.344715 1292 log.go:181] (0xc00027efd0) (0xc000c78000) Stream added, broadcasting: 3\nI0904 13:33:00.345753 1292 log.go:181] (0xc00027efd0) Reply frame received for 3\nI0904 13:33:00.345809 1292 log.go:181] (0xc00027efd0) (0xc000c780a0) Create stream\nI0904 13:33:00.345831 1292 log.go:181] (0xc00027efd0) (0xc000c780a0) Stream added, broadcasting: 5\nI0904 13:33:00.346615 1292 log.go:181] (0xc00027efd0) Reply frame received for 5\nI0904 13:33:00.426260 1292 log.go:181] (0xc00027efd0) Data frame received for 3\nI0904 13:33:00.426303 1292 log.go:181] (0xc000c78000) (3) Data frame handling\nI0904 13:33:00.426322 1292 log.go:181] (0xc00027efd0) Data frame received for 5\nI0904 13:33:00.426328 1292 log.go:181] (0xc000c780a0) (5) Data frame handling\nI0904 13:33:00.426336 1292 log.go:181] (0xc000c780a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.11 32452\nConnection to 172.18.0.11 32452 port [tcp/32452] succeeded!\nI0904 13:33:00.426425 1292 log.go:181] (0xc00027efd0) Data frame received for 5\nI0904 13:33:00.426441 1292 log.go:181] (0xc000c780a0) (5) Data frame handling\nI0904 13:33:00.427965 1292 log.go:181] (0xc00027efd0) Data frame received for 1\nI0904 13:33:00.427984 1292 log.go:181] (0xc000d10500) (1) Data frame handling\nI0904 13:33:00.428005 1292 log.go:181] (0xc000d10500) (1) Data frame sent\nI0904 13:33:00.428020 1292 log.go:181] (0xc00027efd0) (0xc000d10500) Stream removed, broadcasting: 1\nI0904 13:33:00.428158 1292 log.go:181] (0xc00027efd0) Go away received\nI0904 13:33:00.428488 1292 log.go:181] (0xc00027efd0) (0xc000d10500) Stream removed, broadcasting: 1\nI0904 13:33:00.428506 1292 log.go:181] (0xc00027efd0) (0xc000c78000) Stream removed, broadcasting: 3\nI0904 13:33:00.428518 1292 log.go:181] (0xc00027efd0) (0xc000c780a0) Stream removed, broadcasting: 5\n" Sep 4 13:33:00.436: INFO: stdout: "" Sep 4 13:33:00.436: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-7543 execpodxxxzh -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 32452' Sep 4 13:33:00.657: INFO: stderr: "I0904 13:33:00.573973 1309 log.go:181] (0xc000166fd0) (0xc0003da000) 
Create stream\nI0904 13:33:00.574044 1309 log.go:181] (0xc000166fd0) (0xc0003da000) Stream added, broadcasting: 1\nI0904 13:33:00.581158 1309 log.go:181] (0xc000166fd0) Reply frame received for 1\nI0904 13:33:00.581204 1309 log.go:181] (0xc000166fd0) (0xc000b8c0a0) Create stream\nI0904 13:33:00.581214 1309 log.go:181] (0xc000166fd0) (0xc000b8c0a0) Stream added, broadcasting: 3\nI0904 13:33:00.582315 1309 log.go:181] (0xc000166fd0) Reply frame received for 3\nI0904 13:33:00.582346 1309 log.go:181] (0xc000166fd0) (0xc000638000) Create stream\nI0904 13:33:00.582357 1309 log.go:181] (0xc000166fd0) (0xc000638000) Stream added, broadcasting: 5\nI0904 13:33:00.583417 1309 log.go:181] (0xc000166fd0) Reply frame received for 5\nI0904 13:33:00.647668 1309 log.go:181] (0xc000166fd0) Data frame received for 3\nI0904 13:33:00.647728 1309 log.go:181] (0xc000b8c0a0) (3) Data frame handling\nI0904 13:33:00.647762 1309 log.go:181] (0xc000166fd0) Data frame received for 5\nI0904 13:33:00.647778 1309 log.go:181] (0xc000638000) (5) Data frame handling\nI0904 13:33:00.647795 1309 log.go:181] (0xc000638000) (5) Data frame sent\nI0904 13:33:00.647811 1309 log.go:181] (0xc000166fd0) Data frame received for 5\nI0904 13:33:00.647827 1309 log.go:181] (0xc000638000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 32452\nConnection to 172.18.0.14 32452 port [tcp/32452] succeeded!\nI0904 13:33:00.649108 1309 log.go:181] (0xc000166fd0) Data frame received for 1\nI0904 13:33:00.649137 1309 log.go:181] (0xc0003da000) (1) Data frame handling\nI0904 13:33:00.649150 1309 log.go:181] (0xc0003da000) (1) Data frame sent\nI0904 13:33:00.649165 1309 log.go:181] (0xc000166fd0) (0xc0003da000) Stream removed, broadcasting: 1\nI0904 13:33:00.649179 1309 log.go:181] (0xc000166fd0) Go away received\nI0904 13:33:00.649866 1309 log.go:181] (0xc000166fd0) (0xc0003da000) Stream removed, broadcasting: 1\nI0904 13:33:00.649898 1309 log.go:181] (0xc000166fd0) (0xc000b8c0a0) Stream removed, broadcasting: 3\nI0904 13:33:00.649915 1309 log.go:181] (0xc000166fd0) (0xc000638000) Stream removed, broadcasting: 5\n" Sep 4 13:33:00.657: INFO: stdout: "" [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:33:00.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7543" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:15.769 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":303,"completed":81,"skipped":1403,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:33:00.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Sep 4 13:33:00.750: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Sep 4 13:33:00.778: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Sep 4 13:33:00.778: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Sep 4 13:33:00.838: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Sep 4 13:33:00.838: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Sep 4 13:33:00.917: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Sep 4 13:33:00.917: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: 
Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Sep 4 13:33:08.531: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:33:08.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-3630" for this suite. • [SLOW TEST:8.002 seconds] [sig-scheduling] LimitRange /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":303,"completed":82,"skipped":1446,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:33:08.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-5c4283bd-2f5c-44d7-95a7-8d7636629753 Sep 4 13:33:08.866: INFO: Pod name my-hostname-basic-5c4283bd-2f5c-44d7-95a7-8d7636629753: Found 0 pods out of 1 Sep 4 13:33:13.892: INFO: Pod name my-hostname-basic-5c4283bd-2f5c-44d7-95a7-8d7636629753: Found 1 pods out of 1 Sep 4 13:33:13.892: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-5c4283bd-2f5c-44d7-95a7-8d7636629753" are running Sep 4 13:33:16.270: INFO: Pod "my-hostname-basic-5c4283bd-2f5c-44d7-95a7-8d7636629753-dmfwp" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-04 13:33:08 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-04 13:33:08 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-5c4283bd-2f5c-44d7-95a7-8d7636629753]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-04 13:33:08 +0000 UTC Reason:ContainersNotReady Message:containers 
with unready status: [my-hostname-basic-5c4283bd-2f5c-44d7-95a7-8d7636629753]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-04 13:33:08 +0000 UTC Reason: Message:}]) Sep 4 13:33:16.271: INFO: Trying to dial the pod Sep 4 13:33:21.283: INFO: Controller my-hostname-basic-5c4283bd-2f5c-44d7-95a7-8d7636629753: Got expected result from replica 1 [my-hostname-basic-5c4283bd-2f5c-44d7-95a7-8d7636629753-dmfwp]: "my-hostname-basic-5c4283bd-2f5c-44d7-95a7-8d7636629753-dmfwp", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:33:21.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7270" for this suite. • [SLOW TEST:12.599 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":83,"skipped":1465,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:33:21.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Sep 4 13:33:22.249: INFO: Pod name wrapped-volume-race-ce2ac152-94ac-4c36-91fb-aa1707064025: Found 0 pods out of 5 Sep 4 13:33:27.257: INFO: Pod name wrapped-volume-race-ce2ac152-94ac-4c36-91fb-aa1707064025: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-ce2ac152-94ac-4c36-91fb-aa1707064025 in namespace emptydir-wrapper-7954, will wait for the garbage collector to delete the pods Sep 4 13:33:41.343: INFO: Deleting ReplicationController wrapped-volume-race-ce2ac152-94ac-4c36-91fb-aa1707064025 took: 9.308044ms Sep 4 13:33:41.843: INFO: Terminating ReplicationController wrapped-volume-race-ce2ac152-94ac-4c36-91fb-aa1707064025 pods took: 500.244566ms STEP: Creating RC which spawns configmap-volume pods Sep 4 13:33:59.798: INFO: Pod name wrapped-volume-race-71b8dafc-9919-435a-81e0-3f4a17ce3955: Found 0 pods out of 5 Sep 4 13:34:04.812: INFO: Pod name 
wrapped-volume-race-71b8dafc-9919-435a-81e0-3f4a17ce3955: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-71b8dafc-9919-435a-81e0-3f4a17ce3955 in namespace emptydir-wrapper-7954, will wait for the garbage collector to delete the pods Sep 4 13:34:22.955: INFO: Deleting ReplicationController wrapped-volume-race-71b8dafc-9919-435a-81e0-3f4a17ce3955 took: 45.673284ms Sep 4 13:34:23.456: INFO: Terminating ReplicationController wrapped-volume-race-71b8dafc-9919-435a-81e0-3f4a17ce3955 pods took: 500.187038ms STEP: Creating RC which spawns configmap-volume pods Sep 4 13:34:40.318: INFO: Pod name wrapped-volume-race-d4bcb136-04b1-4a53-96fd-7b901fbd5b0d: Found 0 pods out of 5 Sep 4 13:34:45.324: INFO: Pod name wrapped-volume-race-d4bcb136-04b1-4a53-96fd-7b901fbd5b0d: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-d4bcb136-04b1-4a53-96fd-7b901fbd5b0d in namespace emptydir-wrapper-7954, will wait for the garbage collector to delete the pods Sep 4 13:35:03.417: INFO: Deleting ReplicationController wrapped-volume-race-d4bcb136-04b1-4a53-96fd-7b901fbd5b0d took: 21.136973ms Sep 4 13:35:03.918: INFO: Terminating ReplicationController wrapped-volume-race-d4bcb136-04b1-4a53-96fd-7b901fbd5b0d pods took: 500.178185ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:35:21.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7954" for this suite. • [SLOW TEST:119.873 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":303,"completed":84,"skipped":1471,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:35:21.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Sep 4 13:35:27.765: INFO: Successfully updated pod "adopt-release-kz52r" STEP: Checking that the Job readopts the 
Pod Sep 4 13:35:27.765: INFO: Waiting up to 15m0s for pod "adopt-release-kz52r" in namespace "job-2865" to be "adopted" Sep 4 13:35:27.816: INFO: Pod "adopt-release-kz52r": Phase="Running", Reason="", readiness=true. Elapsed: 51.054898ms Sep 4 13:35:29.821: INFO: Pod "adopt-release-kz52r": Phase="Running", Reason="", readiness=true. Elapsed: 2.056012326s Sep 4 13:35:29.821: INFO: Pod "adopt-release-kz52r" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Sep 4 13:35:30.331: INFO: Successfully updated pod "adopt-release-kz52r" STEP: Checking that the Job releases the Pod Sep 4 13:35:30.331: INFO: Waiting up to 15m0s for pod "adopt-release-kz52r" in namespace "job-2865" to be "released" Sep 4 13:35:30.355: INFO: Pod "adopt-release-kz52r": Phase="Running", Reason="", readiness=true. Elapsed: 23.791007ms Sep 4 13:35:32.425: INFO: Pod "adopt-release-kz52r": Phase="Running", Reason="", readiness=true. Elapsed: 2.093393515s Sep 4 13:35:32.425: INFO: Pod "adopt-release-kz52r" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:35:32.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2865" for this suite. • [SLOW TEST:11.474 seconds] [sig-apps] Job /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":303,"completed":85,"skipped":1501,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:35:32.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Sep 4 13:35:33.095: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:35:42.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"init-container-4859" for this suite. • [SLOW TEST:10.146 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":303,"completed":86,"skipped":1531,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:35:42.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:35:43.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4210" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":303,"completed":87,"skipped":1543,"failed":0} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:35:43.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Sep 4 13:35:43.135: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:35:58.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4555" for this suite. • [SLOW TEST:14.954 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":303,"completed":88,"skipped":1543,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:35:58.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-2b039c94-e1bd-4336-95e7-fd261ed5a28a 
STEP: Creating secret with name s-test-opt-upd-5966004d-888b-4882-b5fa-c7da9bbabaee STEP: Creating the pod STEP: Deleting secret s-test-opt-del-2b039c94-e1bd-4336-95e7-fd261ed5a28a STEP: Updating secret s-test-opt-upd-5966004d-888b-4882-b5fa-c7da9bbabaee STEP: Creating secret with name s-test-opt-create-18428db3-48e4-4fac-95a5-37b1940a07a6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:37:31.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4311" for this suite. • [SLOW TEST:92.978 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":89,"skipped":1550,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:37:31.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 13:37:31.083: INFO: Creating ReplicaSet my-hostname-basic-b4addfdb-ae60-4fe6-b782-232060328bee Sep 4 13:37:31.171: INFO: Pod name my-hostname-basic-b4addfdb-ae60-4fe6-b782-232060328bee: Found 0 pods out of 1 Sep 4 13:37:36.232: INFO: Pod name my-hostname-basic-b4addfdb-ae60-4fe6-b782-232060328bee: Found 1 pods out of 1 Sep 4 13:37:36.232: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-b4addfdb-ae60-4fe6-b782-232060328bee" is running Sep 4 13:37:36.246: INFO: Pod "my-hostname-basic-b4addfdb-ae60-4fe6-b782-232060328bee-4chfx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-04 13:37:31 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-04 13:37:34 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-04 13:37:34 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-04 13:37:31 +0000 UTC Reason: Message:}]) Sep 4 13:37:36.247: INFO: Trying to dial the pod Sep 4 13:37:41.260: INFO: Controller 
my-hostname-basic-b4addfdb-ae60-4fe6-b782-232060328bee: Got expected result from replica 1 [my-hostname-basic-b4addfdb-ae60-4fe6-b782-232060328bee-4chfx]: "my-hostname-basic-b4addfdb-ae60-4fe6-b782-232060328bee-4chfx", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:37:41.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4750" for this suite. • [SLOW TEST:10.237 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":90,"skipped":1562,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:37:41.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 13:37:41.364: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:37:42.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1758" for this suite. 
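Custom resource defaulting, which the test above just verified, comes from default values declared in the CRD's structural schema; the apiserver applies them both when admitting write requests and when serving objects read back from storage. A minimal sketch with hypothetical group and field names:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com          # hypothetical group and kind
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
                default: 1           # applied on create/update and when reading from storage

Creating a Widget without spec.replicas and reading it back should show replicas: 1, which is the round trip the test exercises for both the request path and the storage path.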
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":303,"completed":91,"skipped":1565,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:37:42.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 4 13:37:42.677: INFO: Waiting up to 5m0s for pod "downwardapi-volume-db3487eb-e892-42fd-b0a2-f5fa8445f9b5" in namespace "projected-1846" to be "Succeeded or Failed" Sep 4 13:37:42.695: INFO: Pod "downwardapi-volume-db3487eb-e892-42fd-b0a2-f5fa8445f9b5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.421534ms Sep 4 13:37:44.800: INFO: Pod "downwardapi-volume-db3487eb-e892-42fd-b0a2-f5fa8445f9b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123885222s Sep 4 13:37:46.805: INFO: Pod "downwardapi-volume-db3487eb-e892-42fd-b0a2-f5fa8445f9b5": Phase="Running", Reason="", readiness=true. Elapsed: 4.128338035s Sep 4 13:37:48.808: INFO: Pod "downwardapi-volume-db3487eb-e892-42fd-b0a2-f5fa8445f9b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.131634778s STEP: Saw pod success Sep 4 13:37:48.808: INFO: Pod "downwardapi-volume-db3487eb-e892-42fd-b0a2-f5fa8445f9b5" satisfied condition "Succeeded or Failed" Sep 4 13:37:48.810: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-db3487eb-e892-42fd-b0a2-f5fa8445f9b5 container client-container: STEP: delete the pod Sep 4 13:37:49.155: INFO: Waiting for pod downwardapi-volume-db3487eb-e892-42fd-b0a2-f5fa8445f9b5 to disappear Sep 4 13:37:49.175: INFO: Pod downwardapi-volume-db3487eb-e892-42fd-b0a2-f5fa8445f9b5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:37:49.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1846" for this suite. 
• [SLOW TEST:6.595 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":92,"skipped":1576,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:37:49.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-62a0947f-c9ae-4036-ada9-af142c7229f7 in namespace container-probe-1154 Sep 4 13:37:53.310: INFO: Started pod busybox-62a0947f-c9ae-4036-ada9-af142c7229f7 in namespace container-probe-1154 STEP: checking the pod's current state and verifying that restartCount is present Sep 4 13:37:53.313: INFO: Initial restart count of pod busybox-62a0947f-c9ae-4036-ada9-af142c7229f7 is 0 Sep 4 13:38:47.425: INFO: Restart count of pod container-probe-1154/busybox-62a0947f-c9ae-4036-ada9-af142c7229f7 is now 1 (54.112442595s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:38:47.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1154" for this suite. 
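The single restart recorded above (count 0 → 1 after roughly 54s) is the expected outcome of an exec liveness probe whose target file disappears shortly after startup. A minimal pod spec of that shape (timings and names are illustrative, not read from the test source):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec                # illustrative
spec:
  containers:
  - name: busybox
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # succeeds only while /tmp/health exists
      initialDelaySeconds: 5
      periodSeconds: 5

Once /tmp/health is removed the probe starts failing, and after failureThreshold consecutive failures the kubelet kills and restarts the container, bumping restartCount.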
• [SLOW TEST:58.279 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":93,"skipped":1595,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:38:47.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl logs /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1415 STEP: creating a pod Sep 4 13:38:47.754: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.20 --namespace=kubectl-4340 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Sep 4 13:38:47.918: INFO: stderr: "" Sep 4 13:38:47.918: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. Sep 4 13:38:47.918: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Sep 4 13:38:47.918: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-4340" to be "running and ready, or succeeded" Sep 4 13:38:47.961: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 42.839994ms Sep 4 13:38:49.964: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045853286s Sep 4 13:38:51.968: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049838213s Sep 4 13:38:53.972: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 6.054217573s Sep 4 13:38:53.973: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Sep 4 13:38:53.973: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for matching strings Sep 4 13:38:53.973: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4340' Sep 4 13:38:54.099: INFO: stderr: "" Sep 4 13:38:54.099: INFO: stdout: "I0904 13:38:51.375274 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/wf8 414\nI0904 13:38:51.575448 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/plgx 579\nI0904 13:38:51.775438 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/w6l 415\nI0904 13:38:51.975441 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/x42x 283\nI0904 13:38:52.175433 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/4ndj 384\nI0904 13:38:52.375425 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/q27 352\nI0904 13:38:52.575479 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/4bh 440\nI0904 13:38:52.775448 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/5jml 203\nI0904 13:38:52.975450 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/wh2g 498\nI0904 13:38:53.175436 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/smgp 526\nI0904 13:38:53.375493 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/ssg 576\nI0904 13:38:53.575329 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/vsc 430\nI0904 13:38:53.775469 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/ltth 470\nI0904 13:38:53.975447 1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/rzh 348\n" STEP: limiting log lines Sep 4 13:38:54.099: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4340 --tail=1' Sep 4 13:38:54.212: INFO: stderr: "" Sep 4 13:38:54.212: INFO: stdout: "I0904 13:38:54.175408 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/q4v 207\n" Sep 4 13:38:54.212: INFO: got output "I0904 13:38:54.175408 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/q4v 207\n" STEP: limiting log bytes Sep 4 13:38:54.212: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4340 --limit-bytes=1' Sep 4 13:38:54.330: INFO: stderr: "" Sep 4 13:38:54.330: INFO: stdout: "I" Sep 4 13:38:54.330: INFO: got output "I" STEP: exposing timestamps Sep 4 13:38:54.330: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4340 --tail=1 --timestamps' Sep 4 13:38:54.456: INFO: stderr: "" Sep 4 13:38:54.456: INFO: stdout: "2020-09-04T13:38:54.375956806Z I0904 13:38:54.375409 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/rck 328\n" Sep 4 13:38:54.456: INFO: got output "2020-09-04T13:38:54.375956806Z I0904 13:38:54.375409 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/rck 328\n" STEP: restricting to a time range Sep 4 13:38:56.956: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4340 --since=1s' Sep 4 13:38:57.077: INFO: stderr: "" Sep 4 13:38:57.077: INFO: stdout: "I0904 13:38:56.175427 1 logs_generator.go:76] 24 POST /api/v1/namespaces/kube-system/pods/lf6 243\nI0904 13:38:56.375394 1 logs_generator.go:76] 25 GET 
/api/v1/namespaces/ns/pods/glx 538\nI0904 13:38:56.575422 1 logs_generator.go:76] 26 POST /api/v1/namespaces/ns/pods/mwb 427\nI0904 13:38:56.775417 1 logs_generator.go:76] 27 GET /api/v1/namespaces/default/pods/tvkh 569\nI0904 13:38:56.975429 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/kube-system/pods/sck 477\n" Sep 4 13:38:57.078: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4340 --since=24h' Sep 4 13:38:57.205: INFO: stderr: "" Sep 4 13:38:57.205: INFO: stdout: "I0904 13:38:51.375274 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/wf8 414\nI0904 13:38:51.575448 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/plgx 579\nI0904 13:38:51.775438 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/w6l 415\nI0904 13:38:51.975441 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/x42x 283\nI0904 13:38:52.175433 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/4ndj 384\nI0904 13:38:52.375425 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/q27 352\nI0904 13:38:52.575479 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/4bh 440\nI0904 13:38:52.775448 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/5jml 203\nI0904 13:38:52.975450 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/wh2g 498\nI0904 13:38:53.175436 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/smgp 526\nI0904 13:38:53.375493 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/ssg 576\nI0904 13:38:53.575329 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/vsc 430\nI0904 13:38:53.775469 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/ltth 470\nI0904 13:38:53.975447 1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/rzh 348\nI0904 13:38:54.175408 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/q4v 207\nI0904 13:38:54.375409 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/rck 328\nI0904 13:38:54.575430 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/t5fk 413\nI0904 13:38:54.775460 1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/fp8v 330\nI0904 13:38:54.975459 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/92m 437\nI0904 13:38:55.175423 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/zmn 346\nI0904 13:38:55.375400 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/m6kx 587\nI0904 13:38:55.575417 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/8mcr 231\nI0904 13:38:55.775417 1 logs_generator.go:76] 22 GET /api/v1/namespaces/default/pods/vknd 389\nI0904 13:38:55.975434 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/gbpf 456\nI0904 13:38:56.175427 1 logs_generator.go:76] 24 POST /api/v1/namespaces/kube-system/pods/lf6 243\nI0904 13:38:56.375394 1 logs_generator.go:76] 25 GET /api/v1/namespaces/ns/pods/glx 538\nI0904 13:38:56.575422 1 logs_generator.go:76] 26 POST /api/v1/namespaces/ns/pods/mwb 427\nI0904 13:38:56.775417 1 logs_generator.go:76] 27 GET /api/v1/namespaces/default/pods/tvkh 569\nI0904 13:38:56.975429 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/kube-system/pods/sck 477\nI0904 13:38:57.175449 1 logs_generator.go:76] 29 GET /api/v1/namespaces/default/pods/rcg 334\n" [AfterEach] Kubectl logs 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1421 Sep 4 13:38:57.205: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-4340' Sep 4 13:39:00.317: INFO: stderr: "" Sep 4 13:39:00.317: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:39:00.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4340" for this suite. • [SLOW TEST:12.848 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1411 should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":303,"completed":94,"skipped":1597,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:39:00.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 4 13:39:01.167: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 4 13:39:03.262: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734823541, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734823541, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734823541, loc:(*time.Location)(0x7702840)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734823541, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 4 13:39:05.265: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734823541, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734823541, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734823541, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734823541, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 4 13:39:08.316: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:39:08.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3862" for this suite. STEP: Destroying namespace "webhook-3862-markers" for this suite. 
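
For reference, the update/patch steps above toggle the CREATE operation in the webhook's rules: with CREATE removed, the non-compliant configMap is admitted; once CREATE is patched back in, creation is rejected again. A rough CLI equivalent — a sketch only, with <config-name> standing in for the ValidatingWebhookConfiguration the test creates through the API — is:

# drop CREATE so the denying webhook no longer intercepts configMap creation
kubectl patch validatingwebhookconfiguration <config-name> --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'
# restore CREATE; non-compliant configMap creates are denied again
kubectl patch validatingwebhookconfiguration <config-name> --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]'
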
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.332 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":303,"completed":95,"skipped":1611,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:39:08.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 4 13:39:08.737: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 4 13:39:08.769: INFO: Waiting for terminating namespaces to be deleted... 
Sep 4 13:39:08.773: INFO: Logging pods the apiserver thinks is on node latest-worker before test Sep 4 13:39:08.780: INFO: daemon-set-64t9w from daemonsets-1323 started at 2020-08-21 01:17:50 +0000 UTC (1 container statuses recorded) Sep 4 13:39:08.780: INFO: Container app ready: true, restart count 0 Sep 4 13:39:08.780: INFO: daemon-set-ff4l6 from daemonsets-8598 started at 2020-08-26 01:17:55 +0000 UTC (1 container statuses recorded) Sep 4 13:39:08.780: INFO: Container app ready: true, restart count 0 Sep 4 13:39:08.780: INFO: live6 from default started at 2020-08-30 11:51:51 +0000 UTC (1 container statuses recorded) Sep 4 13:39:08.780: INFO: Container live6 ready: false, restart count 0 Sep 4 13:39:08.780: INFO: test-recreate-deployment-f79dd4667-n4rtn from deployment-6445 started at 2020-08-28 02:33:33 +0000 UTC (1 container statuses recorded) Sep 4 13:39:08.780: INFO: Container httpd ready: true, restart count 0 Sep 4 13:39:08.780: INFO: bono-7b5b98574f-j2wlq from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (2 container statuses recorded) Sep 4 13:39:08.780: INFO: Container bono ready: true, restart count 0 Sep 4 13:39:08.780: INFO: Container tailer ready: true, restart count 0 Sep 4 13:39:08.780: INFO: chronos-678bcff97d-665n9 from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (2 container statuses recorded) Sep 4 13:39:08.780: INFO: Container chronos ready: true, restart count 0 Sep 4 13:39:08.780: INFO: Container tailer ready: true, restart count 0 Sep 4 13:39:08.781: INFO: homer-6d85c54796-5grhn from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (1 container statuses recorded) Sep 4 13:39:08.781: INFO: Container homer ready: true, restart count 0 Sep 4 13:39:08.781: INFO: homestead-prov-54ddb995c5-phmgj from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (1 container statuses recorded) Sep 4 13:39:08.781: INFO: Container homestead-prov ready: true, restart count 0 Sep 4 13:39:08.781: INFO: live-test from ims-fqddr started at 2020-08-30 10:33:20 +0000 UTC (1 container statuses recorded) Sep 4 13:39:08.781: INFO: Container live-test ready: false, restart count 0 Sep 4 13:39:08.781: INFO: ralf-645db98795-l7gpf from ims-fqddr started at 2020-08-30 10:27:31 +0000 UTC (2 container statuses recorded) Sep 4 13:39:08.781: INFO: Container ralf ready: true, restart count 0 Sep 4 13:39:08.781: INFO: Container tailer ready: true, restart count 0 Sep 4 13:39:08.781: INFO: kindnet-gmpqb from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Sep 4 13:39:08.781: INFO: Container kindnet-cni ready: true, restart count 1 Sep 4 13:39:08.781: INFO: kube-proxy-82wrf from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Sep 4 13:39:08.781: INFO: Container kube-proxy ready: true, restart count 0 Sep 4 13:39:08.781: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Sep 4 13:39:08.787: INFO: daemon-set-jxhg7 from daemonsets-1323 started at 2020-08-21 01:17:50 +0000 UTC (1 container statuses recorded) Sep 4 13:39:08.787: INFO: Container app ready: true, restart count 0 Sep 4 13:39:08.787: INFO: daemon-set-6qbhl from daemonsets-8598 started at 2020-08-26 01:17:55 +0000 UTC (1 container statuses recorded) Sep 4 13:39:08.787: INFO: Container app ready: true, restart count 0 Sep 4 13:39:08.787: INFO: live3 from default started at 2020-08-30 11:14:22 +0000 UTC (1 container statuses recorded) Sep 4 13:39:08.787: INFO: Container live3 ready: false, restart count 0 Sep 4 13:39:08.787: INFO: live4 
from default started at 2020-08-30 11:19:29 +0000 UTC (1 container statuses recorded) Sep 4 13:39:08.787: INFO: Container live4 ready: false, restart count 0 Sep 4 13:39:08.787: INFO: live5 from default started at 2020-08-30 11:22:52 +0000 UTC (1 container statuses recorded) Sep 4 13:39:08.787: INFO: Container live5 ready: false, restart count 0 Sep 4 13:39:08.787: INFO: astaire-66c5667484-7s6hd from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (2 container statuses recorded) Sep 4 13:39:08.787: INFO: Container astaire ready: true, restart count 0 Sep 4 13:39:08.787: INFO: Container tailer ready: true, restart count 0 Sep 4 13:39:08.787: INFO: cassandra-bf5b4886d-w9qkb from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (1 container statuses recorded) Sep 4 13:39:08.787: INFO: Container cassandra ready: true, restart count 0 Sep 4 13:39:08.787: INFO: ellis-668f49999b-84cll from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (1 container statuses recorded) Sep 4 13:39:08.787: INFO: Container ellis ready: true, restart count 0 Sep 4 13:39:08.787: INFO: etcd-744b4d9f98-5bm8d from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (1 container statuses recorded) Sep 4 13:39:08.787: INFO: Container etcd ready: true, restart count 0 Sep 4 13:39:08.787: INFO: homestead-59959889bd-dh787 from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (2 container statuses recorded) Sep 4 13:39:08.787: INFO: Container homestead ready: true, restart count 0 Sep 4 13:39:08.787: INFO: Container tailer ready: true, restart count 0 Sep 4 13:39:08.787: INFO: sprout-b4bbc5c49-m9nqx from ims-fqddr started at 2020-08-30 10:27:31 +0000 UTC (2 container statuses recorded) Sep 4 13:39:08.787: INFO: Container sprout ready: true, restart count 0 Sep 4 13:39:08.787: INFO: Container tailer ready: true, restart count 0 Sep 4 13:39:08.787: INFO: kindnet-grzzh from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Sep 4 13:39:08.787: INFO: Container kindnet-cni ready: true, restart count 1 Sep 4 13:39:08.787: INFO: kube-proxy-fjk8r from kube-system started at 2020-08-15 09:42:29 +0000 UTC (1 container statuses recorded) Sep 4 13:39:08.787: INFO: Container kube-proxy ready: true, restart count 0 Sep 4 13:39:08.787: INFO: sample-webhook-deployment-cbccbf6bb-456dx from webhook-3862 started at 2020-09-04 13:39:01 +0000 UTC (1 container statuses recorded) Sep 4 13:39:08.787: INFO: Container sample-webhook ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16319841463b2fbf], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.16319841475f04b3], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:39:09.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6030" for this suite. 
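
The two FailedScheduling events above are the expected outcome: the pod asks for a node label no node carries, so all three nodes are filtered out. A minimal manual reproduction — the label key/value and image are illustrative placeholders, not the test's generated spec — looks like:

# request a label no node carries; the pod stays Pending
kubectl run restricted-pod --image=k8s.gcr.io/pause:3.2 \
  --overrides='{"spec":{"nodeSelector":{"e2e-demo":"no-such-node"}}}'
# Events show: 0/3 nodes are available: 3 node(s) didn't match node selector.
kubectl describe pod restricted-pod
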
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":303,"completed":96,"skipped":1643,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:39:09.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 13:39:09.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Sep 4 13:39:10.527: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-04T13:39:10Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-04T13:39:10Z]] name:name1 resourceVersion:6808641 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:0292c7de-b203-479c-a9a2-405c868dc28d] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Sep 4 13:39:20.535: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-04T13:39:20Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-04T13:39:20Z]] name:name2 resourceVersion:6808693 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:406b8d4b-93b4-4d11-812e-6fb866281310] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Sep 4 13:39:30.542: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-04T13:39:10Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-04T13:39:30Z]] name:name1 resourceVersion:6808723 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:0292c7de-b203-479c-a9a2-405c868dc28d] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Sep 4 13:39:40.549: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 
content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-04T13:39:20Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-04T13:39:40Z]] name:name2 resourceVersion:6808751 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:406b8d4b-93b4-4d11-812e-6fb866281310] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Sep 4 13:39:50.558: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-04T13:39:10Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-04T13:39:30Z]] name:name1 resourceVersion:6808781 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:0292c7de-b203-479c-a9a2-405c868dc28d] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Sep 4 13:40:00.567: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-04T13:39:20Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-04T13:39:40Z]] name:name2 resourceVersion:6808811 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:406b8d4b-93b4-4d11-812e-6fb866281310] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:40:11.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-9547" for this suite. 
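
Each "Got : ADDED/MODIFIED/DELETED" line above is a single watch event on the custom resource, delivered in order across the creates, updates, and deletes. Assuming the test's CRD (group mygroup.example.com, resource noxus) is installed, the same event stream can be observed from the CLI:

# long-running watch; prints the full object for every change event
kubectl get noxus.mygroup.example.com --watch -o yaml
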
• [SLOW TEST:61.216 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":303,"completed":97,"skipped":1645,"failed":0} [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:40:11.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-1577 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-1577 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1577 Sep 4 13:40:11.203: INFO: Found 0 stateful pods, waiting for 1 Sep 4 13:40:21.208: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Sep 4 13:40:21.212: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1577 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 4 13:40:21.502: INFO: stderr: "I0904 13:40:21.373556 1470 log.go:181] (0xc0008c1e40) (0xc000c1c5a0) Create stream\nI0904 13:40:21.373613 1470 log.go:181] (0xc0008c1e40) (0xc000c1c5a0) Stream added, broadcasting: 1\nI0904 13:40:21.379468 1470 log.go:181] (0xc0008c1e40) Reply frame received for 1\nI0904 13:40:21.379520 1470 log.go:181] (0xc0008c1e40) (0xc0007e4280) Create stream\nI0904 13:40:21.379536 1470 log.go:181] (0xc0008c1e40) (0xc0007e4280) Stream added, broadcasting: 3\nI0904 13:40:21.381381 1470 log.go:181] (0xc0008c1e40) 
Reply frame received for 3\nI0904 13:40:21.381421 1470 log.go:181] (0xc0008c1e40) (0xc000e1a320) Create stream\nI0904 13:40:21.381436 1470 log.go:181] (0xc0008c1e40) (0xc000e1a320) Stream added, broadcasting: 5\nI0904 13:40:21.381986 1470 log.go:181] (0xc0008c1e40) Reply frame received for 5\nI0904 13:40:21.452222 1470 log.go:181] (0xc0008c1e40) Data frame received for 5\nI0904 13:40:21.452253 1470 log.go:181] (0xc000e1a320) (5) Data frame handling\nI0904 13:40:21.452276 1470 log.go:181] (0xc000e1a320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0904 13:40:21.486122 1470 log.go:181] (0xc0008c1e40) Data frame received for 3\nI0904 13:40:21.486274 1470 log.go:181] (0xc0007e4280) (3) Data frame handling\nI0904 13:40:21.486327 1470 log.go:181] (0xc0007e4280) (3) Data frame sent\nI0904 13:40:21.486381 1470 log.go:181] (0xc0008c1e40) Data frame received for 3\nI0904 13:40:21.486509 1470 log.go:181] (0xc0007e4280) (3) Data frame handling\nI0904 13:40:21.487464 1470 log.go:181] (0xc0008c1e40) Data frame received for 5\nI0904 13:40:21.487488 1470 log.go:181] (0xc000e1a320) (5) Data frame handling\nI0904 13:40:21.488856 1470 log.go:181] (0xc0008c1e40) Data frame received for 1\nI0904 13:40:21.488879 1470 log.go:181] (0xc000c1c5a0) (1) Data frame handling\nI0904 13:40:21.488893 1470 log.go:181] (0xc000c1c5a0) (1) Data frame sent\nI0904 13:40:21.488918 1470 log.go:181] (0xc0008c1e40) (0xc000c1c5a0) Stream removed, broadcasting: 1\nI0904 13:40:21.488955 1470 log.go:181] (0xc0008c1e40) Go away received\nI0904 13:40:21.489219 1470 log.go:181] (0xc0008c1e40) (0xc000c1c5a0) Stream removed, broadcasting: 1\nI0904 13:40:21.489231 1470 log.go:181] (0xc0008c1e40) (0xc0007e4280) Stream removed, broadcasting: 3\nI0904 13:40:21.489236 1470 log.go:181] (0xc0008c1e40) (0xc000e1a320) Stream removed, broadcasting: 5\n" Sep 4 13:40:21.502: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 4 13:40:21.502: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 4 13:40:21.507: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 4 13:40:21.507: INFO: Waiting for statefulset status.replicas updated to 0 Sep 4 13:40:21.525: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Sep 4 13:40:31.541: INFO: POD NODE PHASE GRACE CONDITIONS Sep 4 13:40:31.541: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:11 +0000 UTC }] Sep 4 13:40:31.541: INFO: Sep 4 13:40:31.541: INFO: StatefulSet ss has not reached scale 3, at 1 Sep 4 13:40:32.545: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.997187562s Sep 4 13:40:33.548: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.993289712s Sep 4 13:40:34.564: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.990042288s Sep 4 13:40:35.835: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.974072889s Sep 4 13:40:36.946: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.702929153s Sep 4 
13:40:37.966: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.591647716s Sep 4 13:40:38.971: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.571814964s Sep 4 13:40:39.976: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.567068484s Sep 4 13:40:40.981: INFO: Verifying statefulset ss doesn't scale past 3 for another 561.940171ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1577 Sep 4 13:40:41.995: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1577 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 4 13:40:42.210: INFO: stderr: "I0904 13:40:42.140604 1488 log.go:181] (0xc00018c370) (0xc000a1e000) Create stream\nI0904 13:40:42.140695 1488 log.go:181] (0xc00018c370) (0xc000a1e000) Stream added, broadcasting: 1\nI0904 13:40:42.142874 1488 log.go:181] (0xc00018c370) Reply frame received for 1\nI0904 13:40:42.142897 1488 log.go:181] (0xc00018c370) (0xc0008e21e0) Create stream\nI0904 13:40:42.142912 1488 log.go:181] (0xc00018c370) (0xc0008e21e0) Stream added, broadcasting: 3\nI0904 13:40:42.143997 1488 log.go:181] (0xc00018c370) Reply frame received for 3\nI0904 13:40:42.144049 1488 log.go:181] (0xc00018c370) (0xc0008e32c0) Create stream\nI0904 13:40:42.144064 1488 log.go:181] (0xc00018c370) (0xc0008e32c0) Stream added, broadcasting: 5\nI0904 13:40:42.145353 1488 log.go:181] (0xc00018c370) Reply frame received for 5\nI0904 13:40:42.199313 1488 log.go:181] (0xc00018c370) Data frame received for 3\nI0904 13:40:42.199334 1488 log.go:181] (0xc0008e21e0) (3) Data frame handling\nI0904 13:40:42.199341 1488 log.go:181] (0xc0008e21e0) (3) Data frame sent\nI0904 13:40:42.199547 1488 log.go:181] (0xc00018c370) Data frame received for 5\nI0904 13:40:42.199568 1488 log.go:181] (0xc0008e32c0) (5) Data frame handling\nI0904 13:40:42.199584 1488 log.go:181] (0xc0008e32c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0904 13:40:42.199641 1488 log.go:181] (0xc00018c370) Data frame received for 3\nI0904 13:40:42.199703 1488 log.go:181] (0xc0008e21e0) (3) Data frame handling\nI0904 13:40:42.199752 1488 log.go:181] (0xc00018c370) Data frame received for 5\nI0904 13:40:42.199785 1488 log.go:181] (0xc0008e32c0) (5) Data frame handling\nI0904 13:40:42.201201 1488 log.go:181] (0xc00018c370) Data frame received for 1\nI0904 13:40:42.201212 1488 log.go:181] (0xc000a1e000) (1) Data frame handling\nI0904 13:40:42.201218 1488 log.go:181] (0xc000a1e000) (1) Data frame sent\nI0904 13:40:42.201389 1488 log.go:181] (0xc00018c370) (0xc000a1e000) Stream removed, broadcasting: 1\nI0904 13:40:42.201537 1488 log.go:181] (0xc00018c370) Go away received\nI0904 13:40:42.201910 1488 log.go:181] (0xc00018c370) (0xc000a1e000) Stream removed, broadcasting: 1\nI0904 13:40:42.201943 1488 log.go:181] (0xc00018c370) (0xc0008e21e0) Stream removed, broadcasting: 3\nI0904 13:40:42.201952 1488 log.go:181] (0xc00018c370) (0xc0008e32c0) Stream removed, broadcasting: 5\n" Sep 4 13:40:42.211: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 4 13:40:42.211: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 4 13:40:42.211: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-1577 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 4 13:40:42.442: INFO: stderr: "I0904 13:40:42.350485 1506 log.go:181] (0xc000fa3550) (0xc000f10a00) Create stream\nI0904 13:40:42.350571 1506 log.go:181] (0xc000fa3550) (0xc000f10a00) Stream added, broadcasting: 1\nI0904 13:40:42.355587 1506 log.go:181] (0xc000fa3550) Reply frame received for 1\nI0904 13:40:42.355632 1506 log.go:181] (0xc000fa3550) (0xc0006d8820) Create stream\nI0904 13:40:42.355643 1506 log.go:181] (0xc000fa3550) (0xc0006d8820) Stream added, broadcasting: 3\nI0904 13:40:42.356405 1506 log.go:181] (0xc000fa3550) Reply frame received for 3\nI0904 13:40:42.356455 1506 log.go:181] (0xc000fa3550) (0xc0006d8aa0) Create stream\nI0904 13:40:42.356471 1506 log.go:181] (0xc000fa3550) (0xc0006d8aa0) Stream added, broadcasting: 5\nI0904 13:40:42.357392 1506 log.go:181] (0xc000fa3550) Reply frame received for 5\nI0904 13:40:42.428952 1506 log.go:181] (0xc000fa3550) Data frame received for 3\nI0904 13:40:42.428979 1506 log.go:181] (0xc0006d8820) (3) Data frame handling\nI0904 13:40:42.428990 1506 log.go:181] (0xc0006d8820) (3) Data frame sent\nI0904 13:40:42.429012 1506 log.go:181] (0xc000fa3550) Data frame received for 5\nI0904 13:40:42.429018 1506 log.go:181] (0xc0006d8aa0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0904 13:40:42.429034 1506 log.go:181] (0xc000fa3550) Data frame received for 3\nI0904 13:40:42.429055 1506 log.go:181] (0xc0006d8820) (3) Data frame handling\nI0904 13:40:42.429077 1506 log.go:181] (0xc0006d8aa0) (5) Data frame sent\nI0904 13:40:42.429089 1506 log.go:181] (0xc000fa3550) Data frame received for 5\nI0904 13:40:42.429100 1506 log.go:181] (0xc0006d8aa0) (5) Data frame handling\nI0904 13:40:42.430421 1506 log.go:181] (0xc000fa3550) Data frame received for 1\nI0904 13:40:42.430438 1506 log.go:181] (0xc000f10a00) (1) Data frame handling\nI0904 13:40:42.430455 1506 log.go:181] (0xc000f10a00) (1) Data frame sent\nI0904 13:40:42.430475 1506 log.go:181] (0xc000fa3550) (0xc000f10a00) Stream removed, broadcasting: 1\nI0904 13:40:42.430759 1506 log.go:181] (0xc000fa3550) Go away received\nI0904 13:40:42.430814 1506 log.go:181] (0xc000fa3550) (0xc000f10a00) Stream removed, broadcasting: 1\nI0904 13:40:42.430829 1506 log.go:181] (0xc000fa3550) (0xc0006d8820) Stream removed, broadcasting: 3\nI0904 13:40:42.430836 1506 log.go:181] (0xc000fa3550) (0xc0006d8aa0) Stream removed, broadcasting: 5\n" Sep 4 13:40:42.442: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 4 13:40:42.442: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 4 13:40:42.443: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1577 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 4 13:40:42.737: INFO: stderr: "I0904 13:40:42.667020 1524 log.go:181] (0xc00092d1e0) (0xc000832460) Create stream\nI0904 13:40:42.667074 1524 log.go:181] (0xc00092d1e0) (0xc000832460) Stream added, broadcasting: 1\nI0904 13:40:42.669358 1524 log.go:181] (0xc00092d1e0) Reply frame received for 1\nI0904 13:40:42.669383 1524 log.go:181] (0xc00092d1e0) (0xc000c6a5a0) Create stream\nI0904 13:40:42.669400 1524 log.go:181] (0xc00092d1e0) (0xc000c6a5a0) Stream added, broadcasting: 
3\nI0904 13:40:42.670318 1524 log.go:181] (0xc00092d1e0) Reply frame received for 3\nI0904 13:40:42.670361 1524 log.go:181] (0xc00092d1e0) (0xc000c6a640) Create stream\nI0904 13:40:42.670386 1524 log.go:181] (0xc00092d1e0) (0xc000c6a640) Stream added, broadcasting: 5\nI0904 13:40:42.671294 1524 log.go:181] (0xc00092d1e0) Reply frame received for 5\nI0904 13:40:42.730409 1524 log.go:181] (0xc00092d1e0) Data frame received for 5\nI0904 13:40:42.730458 1524 log.go:181] (0xc00092d1e0) Data frame received for 3\nI0904 13:40:42.730503 1524 log.go:181] (0xc000c6a5a0) (3) Data frame handling\nI0904 13:40:42.730521 1524 log.go:181] (0xc000c6a5a0) (3) Data frame sent\nI0904 13:40:42.730529 1524 log.go:181] (0xc00092d1e0) Data frame received for 3\nI0904 13:40:42.730534 1524 log.go:181] (0xc000c6a5a0) (3) Data frame handling\nI0904 13:40:42.730558 1524 log.go:181] (0xc000c6a640) (5) Data frame handling\nI0904 13:40:42.730589 1524 log.go:181] (0xc000c6a640) (5) Data frame sent\nI0904 13:40:42.730607 1524 log.go:181] (0xc00092d1e0) Data frame received for 5\nI0904 13:40:42.730617 1524 log.go:181] (0xc000c6a640) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0904 13:40:42.732113 1524 log.go:181] (0xc00092d1e0) Data frame received for 1\nI0904 13:40:42.732147 1524 log.go:181] (0xc000832460) (1) Data frame handling\nI0904 13:40:42.732179 1524 log.go:181] (0xc000832460) (1) Data frame sent\nI0904 13:40:42.732241 1524 log.go:181] (0xc00092d1e0) (0xc000832460) Stream removed, broadcasting: 1\nI0904 13:40:42.732267 1524 log.go:181] (0xc00092d1e0) Go away received\nI0904 13:40:42.732587 1524 log.go:181] (0xc00092d1e0) (0xc000832460) Stream removed, broadcasting: 1\nI0904 13:40:42.732603 1524 log.go:181] (0xc00092d1e0) (0xc000c6a5a0) Stream removed, broadcasting: 3\nI0904 13:40:42.732610 1524 log.go:181] (0xc00092d1e0) (0xc000c6a640) Stream removed, broadcasting: 5\n" Sep 4 13:40:42.737: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 4 13:40:42.737: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 4 13:40:42.741: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Sep 4 13:40:42.741: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Sep 4 13:40:42.741: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Sep 4 13:40:42.744: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1577 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 4 13:40:42.982: INFO: stderr: "I0904 13:40:42.888509 1541 log.go:181] (0xc0008a5130) (0xc00089c6e0) Create stream\nI0904 13:40:42.888581 1541 log.go:181] (0xc0008a5130) (0xc00089c6e0) Stream added, broadcasting: 1\nI0904 13:40:42.894068 1541 log.go:181] (0xc0008a5130) Reply frame received for 1\nI0904 13:40:42.894120 1541 log.go:181] (0xc0008a5130) (0xc000f86000) Create stream\nI0904 13:40:42.894138 1541 log.go:181] (0xc0008a5130) (0xc000f86000) Stream added, broadcasting: 3\nI0904 13:40:42.895032 1541 log.go:181] (0xc0008a5130) Reply frame received for 3\nI0904 13:40:42.895094 1541 log.go:181] (0xc0008a5130) (0xc00089c000) Create stream\nI0904 13:40:42.895127 1541 
log.go:181] (0xc0008a5130) (0xc00089c000) Stream added, broadcasting: 5\nI0904 13:40:42.896070 1541 log.go:181] (0xc0008a5130) Reply frame received for 5\nI0904 13:40:42.970081 1541 log.go:181] (0xc0008a5130) Data frame received for 5\nI0904 13:40:42.970114 1541 log.go:181] (0xc00089c000) (5) Data frame handling\nI0904 13:40:42.970134 1541 log.go:181] (0xc00089c000) (5) Data frame sent\nI0904 13:40:42.970144 1541 log.go:181] (0xc0008a5130) Data frame received for 5\nI0904 13:40:42.970153 1541 log.go:181] (0xc00089c000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0904 13:40:42.970177 1541 log.go:181] (0xc0008a5130) Data frame received for 3\nI0904 13:40:42.970187 1541 log.go:181] (0xc000f86000) (3) Data frame handling\nI0904 13:40:42.970203 1541 log.go:181] (0xc000f86000) (3) Data frame sent\nI0904 13:40:42.970268 1541 log.go:181] (0xc0008a5130) Data frame received for 3\nI0904 13:40:42.970297 1541 log.go:181] (0xc000f86000) (3) Data frame handling\nI0904 13:40:42.971576 1541 log.go:181] (0xc0008a5130) Data frame received for 1\nI0904 13:40:42.971598 1541 log.go:181] (0xc00089c6e0) (1) Data frame handling\nI0904 13:40:42.971610 1541 log.go:181] (0xc00089c6e0) (1) Data frame sent\nI0904 13:40:42.971628 1541 log.go:181] (0xc0008a5130) (0xc00089c6e0) Stream removed, broadcasting: 1\nI0904 13:40:42.971743 1541 log.go:181] (0xc0008a5130) Go away received\nI0904 13:40:42.972018 1541 log.go:181] (0xc0008a5130) (0xc00089c6e0) Stream removed, broadcasting: 1\nI0904 13:40:42.972032 1541 log.go:181] (0xc0008a5130) (0xc000f86000) Stream removed, broadcasting: 3\nI0904 13:40:42.972039 1541 log.go:181] (0xc0008a5130) (0xc00089c000) Stream removed, broadcasting: 5\n" Sep 4 13:40:42.982: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 4 13:40:42.982: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 4 13:40:42.982: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1577 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 4 13:40:43.269: INFO: stderr: "I0904 13:40:43.128539 1559 log.go:181] (0xc000d46dc0) (0xc000133220) Create stream\nI0904 13:40:43.128605 1559 log.go:181] (0xc000d46dc0) (0xc000133220) Stream added, broadcasting: 1\nI0904 13:40:43.137096 1559 log.go:181] (0xc000d46dc0) Reply frame received for 1\nI0904 13:40:43.137158 1559 log.go:181] (0xc000d46dc0) (0xc0001321e0) Create stream\nI0904 13:40:43.137170 1559 log.go:181] (0xc000d46dc0) (0xc0001321e0) Stream added, broadcasting: 3\nI0904 13:40:43.138846 1559 log.go:181] (0xc000d46dc0) Reply frame received for 3\nI0904 13:40:43.138878 1559 log.go:181] (0xc000d46dc0) (0xc000718320) Create stream\nI0904 13:40:43.138891 1559 log.go:181] (0xc000d46dc0) (0xc000718320) Stream added, broadcasting: 5\nI0904 13:40:43.139720 1559 log.go:181] (0xc000d46dc0) Reply frame received for 5\nI0904 13:40:43.213067 1559 log.go:181] (0xc000d46dc0) Data frame received for 5\nI0904 13:40:43.213100 1559 log.go:181] (0xc000718320) (5) Data frame handling\nI0904 13:40:43.213118 1559 log.go:181] (0xc000718320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0904 13:40:43.259104 1559 log.go:181] (0xc000d46dc0) Data frame received for 5\nI0904 13:40:43.259134 1559 log.go:181] (0xc000718320) (5) Data frame handling\nI0904 13:40:43.259152 1559 log.go:181] (0xc000d46dc0) Data 
frame received for 3\nI0904 13:40:43.259159 1559 log.go:181] (0xc0001321e0) (3) Data frame handling\nI0904 13:40:43.259168 1559 log.go:181] (0xc0001321e0) (3) Data frame sent\nI0904 13:40:43.259175 1559 log.go:181] (0xc000d46dc0) Data frame received for 3\nI0904 13:40:43.259181 1559 log.go:181] (0xc0001321e0) (3) Data frame handling\nI0904 13:40:43.260547 1559 log.go:181] (0xc000d46dc0) Data frame received for 1\nI0904 13:40:43.260592 1559 log.go:181] (0xc000133220) (1) Data frame handling\nI0904 13:40:43.260608 1559 log.go:181] (0xc000133220) (1) Data frame sent\nI0904 13:40:43.260634 1559 log.go:181] (0xc000d46dc0) (0xc000133220) Stream removed, broadcasting: 1\nI0904 13:40:43.260647 1559 log.go:181] (0xc000d46dc0) Go away received\nI0904 13:40:43.261057 1559 log.go:181] (0xc000d46dc0) (0xc000133220) Stream removed, broadcasting: 1\nI0904 13:40:43.261073 1559 log.go:181] (0xc000d46dc0) (0xc0001321e0) Stream removed, broadcasting: 3\nI0904 13:40:43.261083 1559 log.go:181] (0xc000d46dc0) (0xc000718320) Stream removed, broadcasting: 5\n" Sep 4 13:40:43.269: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 4 13:40:43.269: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 4 13:40:43.269: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1577 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 4 13:40:43.511: INFO: stderr: "I0904 13:40:43.406436 1577 log.go:181] (0xc000e02fd0) (0xc000615360) Create stream\nI0904 13:40:43.406499 1577 log.go:181] (0xc000e02fd0) (0xc000615360) Stream added, broadcasting: 1\nI0904 13:40:43.408653 1577 log.go:181] (0xc000e02fd0) Reply frame received for 1\nI0904 13:40:43.408672 1577 log.go:181] (0xc000e02fd0) (0xc000615ae0) Create stream\nI0904 13:40:43.408680 1577 log.go:181] (0xc000e02fd0) (0xc000615ae0) Stream added, broadcasting: 3\nI0904 13:40:43.409802 1577 log.go:181] (0xc000e02fd0) Reply frame received for 3\nI0904 13:40:43.409855 1577 log.go:181] (0xc000e02fd0) (0xc00088c500) Create stream\nI0904 13:40:43.409873 1577 log.go:181] (0xc000e02fd0) (0xc00088c500) Stream added, broadcasting: 5\nI0904 13:40:43.411100 1577 log.go:181] (0xc000e02fd0) Reply frame received for 5\nI0904 13:40:43.472468 1577 log.go:181] (0xc000e02fd0) Data frame received for 5\nI0904 13:40:43.472517 1577 log.go:181] (0xc00088c500) (5) Data frame handling\nI0904 13:40:43.472532 1577 log.go:181] (0xc00088c500) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0904 13:40:43.499536 1577 log.go:181] (0xc000e02fd0) Data frame received for 3\nI0904 13:40:43.499570 1577 log.go:181] (0xc000615ae0) (3) Data frame handling\nI0904 13:40:43.499597 1577 log.go:181] (0xc000615ae0) (3) Data frame sent\nI0904 13:40:43.499905 1577 log.go:181] (0xc000e02fd0) Data frame received for 5\nI0904 13:40:43.499947 1577 log.go:181] (0xc00088c500) (5) Data frame handling\nI0904 13:40:43.499978 1577 log.go:181] (0xc000e02fd0) Data frame received for 3\nI0904 13:40:43.500008 1577 log.go:181] (0xc000615ae0) (3) Data frame handling\nI0904 13:40:43.503780 1577 log.go:181] (0xc000e02fd0) Data frame received for 1\nI0904 13:40:43.503814 1577 log.go:181] (0xc000615360) (1) Data frame handling\nI0904 13:40:43.503837 1577 log.go:181] (0xc000615360) (1) Data frame sent\nI0904 13:40:43.503862 1577 log.go:181] (0xc000e02fd0) (0xc000615360) Stream removed, 
broadcasting: 1\nI0904 13:40:43.503929 1577 log.go:181] (0xc000e02fd0) Go away received\nI0904 13:40:43.504411 1577 log.go:181] (0xc000e02fd0) (0xc000615360) Stream removed, broadcasting: 1\nI0904 13:40:43.504449 1577 log.go:181] (0xc000e02fd0) (0xc000615ae0) Stream removed, broadcasting: 3\nI0904 13:40:43.504461 1577 log.go:181] (0xc000e02fd0) (0xc00088c500) Stream removed, broadcasting: 5\n" Sep 4 13:40:43.511: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 4 13:40:43.511: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 4 13:40:43.511: INFO: Waiting for statefulset status.replicas updated to 0 Sep 4 13:40:43.558: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Sep 4 13:40:53.564: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 4 13:40:53.564: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Sep 4 13:40:53.564: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Sep 4 13:40:53.628: INFO: POD NODE PHASE GRACE CONDITIONS Sep 4 13:40:53.628: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:11 +0000 UTC }] Sep 4 13:40:53.628: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:31 +0000 UTC }] Sep 4 13:40:53.628: INFO: ss-2 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:31 +0000 UTC }] Sep 4 13:40:53.628: INFO: Sep 4 13:40:53.628: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 4 13:40:54.708: INFO: POD NODE PHASE GRACE CONDITIONS Sep 4 13:40:54.709: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:11 +0000 UTC }] Sep 4 13:40:54.709: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:31 +0000 UTC } {Ready False 0001-01-01 
00:00:00 +0000 UTC 2020-09-04 13:40:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:31 +0000 UTC }] Sep 4 13:40:54.709: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:31 +0000 UTC }] Sep 4 13:40:54.709: INFO: Sep 4 13:40:54.709: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 4 13:40:55.786: INFO: POD NODE PHASE GRACE CONDITIONS Sep 4 13:40:55.786: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:11 +0000 UTC }] Sep 4 13:40:55.786: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:31 +0000 UTC }] Sep 4 13:40:55.787: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:31 +0000 UTC }] Sep 4 13:40:55.787: INFO: Sep 4 13:40:55.787: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 4 13:40:56.791: INFO: POD NODE PHASE GRACE CONDITIONS Sep 4 13:40:56.792: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:11 +0000 UTC }] Sep 4 13:40:56.792: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:44 +0000 UTC ContainersNotReady 
containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:31 +0000 UTC }] Sep 4 13:40:56.792: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:31 +0000 UTC }] Sep 4 13:40:56.792: INFO: Sep 4 13:40:56.792: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 4 13:40:57.815: INFO: POD NODE PHASE GRACE CONDITIONS Sep 4 13:40:57.815: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:11 +0000 UTC }] Sep 4 13:40:57.815: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:31 +0000 UTC }] Sep 4 13:40:57.815: INFO: Sep 4 13:40:57.816: INFO: StatefulSet ss has not reached scale 0, at 2 Sep 4 13:40:58.820: INFO: POD NODE PHASE GRACE CONDITIONS Sep 4 13:40:58.820: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:11 +0000 UTC }] Sep 4 13:40:58.820: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:31 +0000 UTC }] Sep 4 13:40:58.820: INFO: Sep 4 13:40:58.820: INFO: StatefulSet ss has not reached scale 0, at 2 Sep 4 13:40:59.827: INFO: POD NODE PHASE GRACE CONDITIONS Sep 4 13:40:59.827: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled 
True 0001-01-01 00:00:00 +0000 UTC 2020-09-04 13:40:11 +0000 UTC }] Sep 4 13:40:59.827: INFO: Sep 4 13:40:59.827: INFO: StatefulSet ss has not reached scale 0, at 1 Sep 4 13:41:00.844: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.742676196s Sep 4 13:41:01.848: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.72504829s Sep 4 13:41:02.853: INFO: Verifying statefulset ss doesn't scale past 0 for another 720.944695ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-1577 Sep 4 13:41:03.857: INFO: Scaling statefulset ss to 0 Sep 4 13:41:03.867: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 4 13:41:03.869: INFO: Deleting all statefulset in ns statefulset-1577 Sep 4 13:41:03.871: INFO: Scaling statefulset ss to 0 Sep 4 13:41:03.878: INFO: Waiting for statefulset status.replicas updated to 0 Sep 4 13:41:03.880: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:41:03.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1577" for this suite. • [SLOW TEST:52.816 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":303,"completed":98,"skipped":1645,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:41:03.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Sep 4 13:41:04.046: INFO: Waiting up to 5m0s for pod "pod-64f431cb-0a06-4ab1-933d-f7294fcd8da7" in namespace 
"emptydir-5643" to be "Succeeded or Failed" Sep 4 13:41:04.050: INFO: Pod "pod-64f431cb-0a06-4ab1-933d-f7294fcd8da7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.190056ms Sep 4 13:41:06.407: INFO: Pod "pod-64f431cb-0a06-4ab1-933d-f7294fcd8da7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.361054823s Sep 4 13:41:08.411: INFO: Pod "pod-64f431cb-0a06-4ab1-933d-f7294fcd8da7": Phase="Running", Reason="", readiness=true. Elapsed: 4.36497704s Sep 4 13:41:10.416: INFO: Pod "pod-64f431cb-0a06-4ab1-933d-f7294fcd8da7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.369763459s STEP: Saw pod success Sep 4 13:41:10.416: INFO: Pod "pod-64f431cb-0a06-4ab1-933d-f7294fcd8da7" satisfied condition "Succeeded or Failed" Sep 4 13:41:10.420: INFO: Trying to get logs from node latest-worker2 pod pod-64f431cb-0a06-4ab1-933d-f7294fcd8da7 container test-container: STEP: delete the pod Sep 4 13:41:10.555: INFO: Waiting for pod pod-64f431cb-0a06-4ab1-933d-f7294fcd8da7 to disappear Sep 4 13:41:10.568: INFO: Pod pod-64f431cb-0a06-4ab1-933d-f7294fcd8da7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:41:10.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5643" for this suite. • [SLOW TEST:6.673 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":99,"skipped":1648,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:41:10.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:41:14.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7035" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":303,"completed":100,"skipped":1664,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:41:14.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 4 13:41:15.662: INFO: Waiting up to 5m0s for pod "downwardapi-volume-62423879-bb65-45fd-9b07-bc35e4b95553" in namespace "projected-2118" to be "Succeeded or Failed" Sep 4 13:41:15.713: INFO: Pod "downwardapi-volume-62423879-bb65-45fd-9b07-bc35e4b95553": Phase="Pending", Reason="", readiness=false. Elapsed: 51.303045ms Sep 4 13:41:18.007: INFO: Pod "downwardapi-volume-62423879-bb65-45fd-9b07-bc35e4b95553": Phase="Pending", Reason="", readiness=false. Elapsed: 2.34486067s Sep 4 13:41:20.193: INFO: Pod "downwardapi-volume-62423879-bb65-45fd-9b07-bc35e4b95553": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.531016367s STEP: Saw pod success Sep 4 13:41:20.193: INFO: Pod "downwardapi-volume-62423879-bb65-45fd-9b07-bc35e4b95553" satisfied condition "Succeeded or Failed" Sep 4 13:41:20.349: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-62423879-bb65-45fd-9b07-bc35e4b95553 container client-container: STEP: delete the pod Sep 4 13:41:20.428: INFO: Waiting for pod downwardapi-volume-62423879-bb65-45fd-9b07-bc35e4b95553 to disappear Sep 4 13:41:20.433: INFO: Pod downwardapi-volume-62423879-bb65-45fd-9b07-bc35e4b95553 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:41:20.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2118" for this suite. 
• [SLOW TEST:5.647 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":101,"skipped":1720,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:41:20.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Sep 4 13:41:20.661: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Sep 4 13:41:33.161: INFO: >>> kubeConfig: /root/.kube/config Sep 4 13:41:36.127: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:41:47.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3198" for this suite. 
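The multi-version CRD publishing case above registers a single CRD that serves two versions of the same group and then checks that both show up in the apiserver's published OpenAPI document. The object involved looks roughly like this. A sketch with placeholder group and kind names, not the test's randomly generated ones:

```go
package sketch

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// multiVersionCRD sketches a CRD serving v1 and v2 of the same group;
// exactly one version may be the storage version.
func multiVersionCRD() *apiextensionsv1.CustomResourceDefinition {
	schema := &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
	}
	mkVersion := func(name string, storage bool) apiextensionsv1.CustomResourceDefinitionVersion {
		return apiextensionsv1.CustomResourceDefinitionVersion{
			Name: name, Served: true, Storage: storage, Schema: schema,
		}
	}
	return &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.stable.example.com"}, // placeholder
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "stable.example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				mkVersion("v1", true),
				mkVersion("v2", false),
			},
		},
	}
}
```

The two-CRDs variant of the same check simply creates two such objects with distinct groups (or the same group and different versions) and inspects the merged OpenAPI spec.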
• [SLOW TEST:27.052 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":303,"completed":102,"skipped":1726,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:41:47.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-d0bdcb0c-3e52-4a97-a3fb-5ba472a30f7b STEP: Creating a pod to test consume configMaps Sep 4 13:41:47.650: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2bdbf09e-b90b-4189-a37e-c661e297de3c" in namespace "projected-7888" to be "Succeeded or Failed" Sep 4 13:41:47.673: INFO: Pod "pod-projected-configmaps-2bdbf09e-b90b-4189-a37e-c661e297de3c": Phase="Pending", Reason="", readiness=false. Elapsed: 23.407955ms Sep 4 13:41:49.920: INFO: Pod "pod-projected-configmaps-2bdbf09e-b90b-4189-a37e-c661e297de3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.270581645s Sep 4 13:41:51.924: INFO: Pod "pod-projected-configmaps-2bdbf09e-b90b-4189-a37e-c661e297de3c": Phase="Running", Reason="", readiness=true. Elapsed: 4.274190724s Sep 4 13:41:53.929: INFO: Pod "pod-projected-configmaps-2bdbf09e-b90b-4189-a37e-c661e297de3c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.278869211s STEP: Saw pod success Sep 4 13:41:53.929: INFO: Pod "pod-projected-configmaps-2bdbf09e-b90b-4189-a37e-c661e297de3c" satisfied condition "Succeeded or Failed" Sep 4 13:41:53.932: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-2bdbf09e-b90b-4189-a37e-c661e297de3c container projected-configmap-volume-test: STEP: delete the pod Sep 4 13:41:54.008: INFO: Waiting for pod pod-projected-configmaps-2bdbf09e-b90b-4189-a37e-c661e297de3c to disappear Sep 4 13:41:54.025: INFO: Pod pod-projected-configmaps-2bdbf09e-b90b-4189-a37e-c661e297de3c no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:41:54.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7888" for this suite. • [SLOW TEST:6.483 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":103,"skipped":1742,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:41:54.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Sep 4 13:41:54.137: INFO: >>> kubeConfig: /root/.kube/config Sep 4 13:41:56.163: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:42:07.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2819" for this suite. 
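The recurring 'Waiting up to 5m0s for pod "..." to be "Succeeded or Failed"' / Elapsed lines throughout this run (most recently in the projected configMap case above) come from a phase-polling loop. A minimal client-go equivalent, with assumed interval and timeout values:

```go
package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodTerminal polls the pod's phase until it reaches Succeeded
// (success) or Failed (error), mirroring the "Succeeded or Failed" waits
// logged above.
func waitForPodTerminal(c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil // condition satisfied, stop polling
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %s/%s failed", ns, name)
		}
		return false, nil // still Pending/Running: keep polling
	})
}
```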
• [SLOW TEST:13.600 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":303,"completed":104,"skipped":1756,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:42:07.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Sep 4 13:42:14.274: INFO: Successfully updated pod "annotationupdate9a621cd5-2741-445b-9af6-a39b62f4982e" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:42:16.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4071" for this suite. 
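The annotation-update case above ("Successfully updated pod annotationupdate...") relies on the kubelet refreshing a projected downward API file after the pod's annotations are mutated; the test edits the annotations and then waits for the new content to appear in the container. A sketch of the volume involved, with illustrative names:

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// annotationsVolume sketches a projected volume exposing the pod's
// annotations as a file; the kubelet rewrites the file on a later sync
// after the annotations change, which is what the test waits for.
func annotationsVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "annotations",
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.annotations",
							},
						}},
					},
				}},
			},
		},
	}
}
```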
• [SLOW TEST:8.695 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":105,"skipped":1762,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:42:16.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-b545caef-ec03-4100-82ce-013b608b21f5 STEP: Creating a pod to test consume configMaps Sep 4 13:42:16.518: INFO: Waiting up to 5m0s for pod "pod-configmaps-28b562da-db31-4bf1-a7c5-f2c49d886ee2" in namespace "configmap-9833" to be "Succeeded or Failed" Sep 4 13:42:16.543: INFO: Pod "pod-configmaps-28b562da-db31-4bf1-a7c5-f2c49d886ee2": Phase="Pending", Reason="", readiness=false. Elapsed: 25.109459ms Sep 4 13:42:18.552: INFO: Pod "pod-configmaps-28b562da-db31-4bf1-a7c5-f2c49d886ee2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034079636s Sep 4 13:42:20.556: INFO: Pod "pod-configmaps-28b562da-db31-4bf1-a7c5-f2c49d886ee2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038000103s STEP: Saw pod success Sep 4 13:42:20.556: INFO: Pod "pod-configmaps-28b562da-db31-4bf1-a7c5-f2c49d886ee2" satisfied condition "Succeeded or Failed" Sep 4 13:42:20.559: INFO: Trying to get logs from node latest-worker pod pod-configmaps-28b562da-db31-4bf1-a7c5-f2c49d886ee2 container configmap-volume-test: STEP: delete the pod Sep 4 13:42:20.792: INFO: Waiting for pod pod-configmaps-28b562da-db31-4bf1-a7c5-f2c49d886ee2 to disappear Sep 4 13:42:20.828: INFO: Pod pod-configmaps-28b562da-db31-4bf1-a7c5-f2c49d886ee2 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:42:20.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9833" for this suite. 
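The "Trying to get logs from node ... container configmap-volume-test" step above is how these volume cases verify content: the test container prints the mounted file to stdout, and the framework reads the container log back and compares it with the expected payload. A minimal client-go sketch of that read, with hypothetical names:

```go
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// containerLogs fetches one container's log, the same verification channel
// used by the ConfigMap and projected-volume cases in this run.
func containerLogs(c kubernetes.Interface, ns, pod, container string) (string, error) {
	raw, err := c.CoreV1().Pods(ns).
		GetLogs(pod, &corev1.PodLogOptions{Container: container}).
		DoRaw(context.TODO())
	if err != nil {
		return "", err
	}
	return string(raw), nil
}
```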
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":106,"skipped":1762,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] IngressClass API should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:42:20.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Sep 4 13:42:21.076: INFO: starting watch STEP: patching STEP: updating Sep 4 13:42:21.146: INFO: waiting for watch events with expected annotations Sep 4 13:42:21.146: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:42:21.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-5903" for this suite. 
•{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":303,"completed":107,"skipped":1832,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:42:21.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 13:42:21.318: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5383' Sep 4 13:42:21.743: INFO: stderr: "" Sep 4 13:42:21.743: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Sep 4 13:42:21.743: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5383' Sep 4 13:42:22.263: INFO: stderr: "" Sep 4 13:42:22.263: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Sep 4 13:42:23.267: INFO: Selector matched 1 pods for map[app:agnhost] Sep 4 13:42:23.267: INFO: Found 0 / 1 Sep 4 13:42:24.434: INFO: Selector matched 1 pods for map[app:agnhost] Sep 4 13:42:24.434: INFO: Found 0 / 1 Sep 4 13:42:25.266: INFO: Selector matched 1 pods for map[app:agnhost] Sep 4 13:42:25.266: INFO: Found 0 / 1 Sep 4 13:42:26.277: INFO: Selector matched 1 pods for map[app:agnhost] Sep 4 13:42:26.277: INFO: Found 1 / 1 Sep 4 13:42:26.277: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Sep 4 13:42:26.279: INFO: Selector matched 1 pods for map[app:agnhost] Sep 4 13:42:26.279: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Sep 4 13:42:26.279: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config describe pod agnhost-primary-8jkqv --namespace=kubectl-5383' Sep 4 13:42:26.466: INFO: stderr: "" Sep 4 13:42:26.466: INFO: stdout: "Name: agnhost-primary-8jkqv\nNamespace: kubectl-5383\nPriority: 0\nNode: latest-worker/172.18.0.11\nStart Time: Fri, 04 Sep 2020 13:42:22 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.2.148\nIPs:\n IP: 10.244.2.148\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://ec4fcfe81d983cfc7f59f8333a9c59038e72d620837e9fcf2c24e78cb6b2a401\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 04 Sep 2020 13:42:24 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-rrb4s (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-rrb4s:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-rrb4s\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s Successfully assigned kubectl-5383/agnhost-primary-8jkqv to latest-worker\n Normal Pulled 3s kubelet, latest-worker Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.20\" already present on machine\n Normal Created 2s kubelet, latest-worker Created container agnhost-primary\n Normal Started 2s kubelet, latest-worker Started container agnhost-primary\n" Sep 4 13:42:26.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config describe rc agnhost-primary --namespace=kubectl-5383' Sep 4 13:42:26.609: INFO: stderr: "" Sep 4 13:42:26.609: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-5383\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-primary-8jkqv\n" Sep 4 13:42:26.609: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config describe service agnhost-primary --namespace=kubectl-5383' Sep 4 13:42:26.737: INFO: stderr: "" Sep 4 13:42:26.737: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-5383\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP: 10.101.220.201\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.148:6379\nSession Affinity: None\nEvents: \n" Sep 4 13:42:26.740: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config describe node latest-control-plane' Sep 4 13:42:26.869: INFO: stderr: 
"" Sep 4 13:42:26.869: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 15 Aug 2020 09:42:01 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Fri, 04 Sep 2020 13:42:24 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 04 Sep 2020 13:42:14 +0000 Sat, 15 Aug 2020 09:41:59 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 04 Sep 2020 13:42:14 +0000 Sat, 15 Aug 2020 09:41:59 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 04 Sep 2020 13:42:14 +0000 Sat, 15 Aug 2020 09:41:59 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 04 Sep 2020 13:42:14 +0000 Sat, 15 Aug 2020 09:42:31 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.12\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nSystem Info:\n Machine ID: 355da13825784523b4a253c23edd1334\n System UUID: 8f367e0f-042b-45ff-9966-5ca6bcc1cc56\n Boot ID: 11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version: 4.15.0-109-generic\n OS Image: Ubuntu 20.04 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0-beta.1-85-g334f567e\n Kubelet Version: v1.19.0-rc.1\n Kube-Proxy Version: v1.19.0-rc.1\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-f9fd979d6-f7hdg 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 20d\n kube-system coredns-f9fd979d6-vxzgb 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 20d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 20d\n kube-system kindnet-qmj2d 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 20d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 20d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 20d\n kube-system kube-proxy-8zfjc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 20d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 20d\n local-path-storage local-path-provisioner-8b46957d4-csnr8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 20d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Sep 4 13:42:26.869: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config describe namespace 
kubectl-5383' Sep 4 13:42:26.975: INFO: stderr: "" Sep 4 13:42:26.975: INFO: stdout: "Name: kubectl-5383\nLabels: e2e-framework=kubectl\n e2e-run=54d8b692-ad73-47a4-be0a-850bae8fa01e\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:42:26.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5383" for this suite. • [SLOW TEST:5.772 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1105 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":303,"completed":108,"skipped":1855,"failed":0} SSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:42:27.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9569 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9569;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9569 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9569;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9569.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9569.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9569.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9569.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9569.svc SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.dns-test-service.dns-9569.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9569.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9569.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9569.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9569.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9569.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9569.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9569.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 93.68.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.68.93_udp@PTR;check="$$(dig +tcp +noall +answer +search 93.68.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.68.93_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9569 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9569;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9569 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9569;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9569.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9569.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9569.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9569.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9569.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9569.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9569.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9569.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9569.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9569.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9569.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9569.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9569.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 93.68.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.68.93_udp@PTR;check="$$(dig +tcp +noall +answer +search 93.68.101.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.101.68.93_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 4 13:42:35.379: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:35.381: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:35.384: INFO: Unable to read wheezy_udp@dns-test-service.dns-9569 from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:35.386: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9569 from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:35.389: INFO: Unable to read wheezy_udp@dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:35.391: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:35.393: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:35.396: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:35.443: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:35.449: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:35.451: INFO: Unable to read jessie_udp@dns-test-service.dns-9569 from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:35.453: INFO: Unable to read jessie_tcp@dns-test-service.dns-9569 from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:35.456: INFO: Unable to read jessie_udp@dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:35.458: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:35.460: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:35.462: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:35.476: INFO: Lookups using dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9569 wheezy_tcp@dns-test-service.dns-9569 wheezy_udp@dns-test-service.dns-9569.svc wheezy_tcp@dns-test-service.dns-9569.svc wheezy_udp@_http._tcp.dns-test-service.dns-9569.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9569.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9569 jessie_tcp@dns-test-service.dns-9569 jessie_udp@dns-test-service.dns-9569.svc jessie_tcp@dns-test-service.dns-9569.svc jessie_udp@_http._tcp.dns-test-service.dns-9569.svc jessie_tcp@_http._tcp.dns-test-service.dns-9569.svc] Sep 4 13:42:40.481: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:40.485: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:40.488: INFO: Unable to read wheezy_udp@dns-test-service.dns-9569 from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:40.491: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9569 from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:40.494: INFO: Unable to read wheezy_udp@dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:40.497: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:40.500: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:40.503: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:40.521: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:40.523: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:40.526: INFO: Unable to read jessie_udp@dns-test-service.dns-9569 from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:40.528: INFO: Unable to read jessie_tcp@dns-test-service.dns-9569 from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:40.530: INFO: Unable to read jessie_udp@dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:40.532: INFO: Unable to read jessie_tcp@dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:40.535: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:40.537: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:40.552: INFO: Lookups using dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9569 wheezy_tcp@dns-test-service.dns-9569 wheezy_udp@dns-test-service.dns-9569.svc wheezy_tcp@dns-test-service.dns-9569.svc wheezy_udp@_http._tcp.dns-test-service.dns-9569.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9569.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9569 jessie_tcp@dns-test-service.dns-9569 jessie_udp@dns-test-service.dns-9569.svc jessie_tcp@dns-test-service.dns-9569.svc jessie_udp@_http._tcp.dns-test-service.dns-9569.svc jessie_tcp@_http._tcp.dns-test-service.dns-9569.svc] Sep 4 13:42:45.482: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:45.487: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:45.491: INFO: Unable to read wheezy_udp@dns-test-service.dns-9569 from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:45.494: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9569 from pod 
dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:45.497: INFO: Unable to read wheezy_udp@dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:45.503: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:45.509: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:45.511: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:45.528: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:45.530: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:45.532: INFO: Unable to read jessie_udp@dns-test-service.dns-9569 from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:45.535: INFO: Unable to read jessie_tcp@dns-test-service.dns-9569 from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:45.537: INFO: Unable to read jessie_udp@dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:45.539: INFO: Unable to read jessie_tcp@dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:45.543: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:45.546: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:45.566: INFO: Lookups using dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9569 wheezy_tcp@dns-test-service.dns-9569 wheezy_udp@dns-test-service.dns-9569.svc wheezy_tcp@dns-test-service.dns-9569.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-9569.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9569.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9569 jessie_tcp@dns-test-service.dns-9569 jessie_udp@dns-test-service.dns-9569.svc jessie_tcp@dns-test-service.dns-9569.svc jessie_udp@_http._tcp.dns-test-service.dns-9569.svc jessie_tcp@_http._tcp.dns-test-service.dns-9569.svc] Sep 4 13:42:50.482: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:50.536: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:50.539: INFO: Unable to read wheezy_udp@dns-test-service.dns-9569 from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:50.543: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9569 from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:50.546: INFO: Unable to read wheezy_udp@dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:50.549: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:50.552: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:50.555: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:50.575: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:50.578: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:50.580: INFO: Unable to read jessie_udp@dns-test-service.dns-9569 from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:50.583: INFO: Unable to read jessie_tcp@dns-test-service.dns-9569 from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:50.585: INFO: Unable to read jessie_udp@dns-test-service.dns-9569.svc from pod 
dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:50.588: INFO: Unable to read jessie_tcp@dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:50.591: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:50.594: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:50.614: INFO: Lookups using dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9569 wheezy_tcp@dns-test-service.dns-9569 wheezy_udp@dns-test-service.dns-9569.svc wheezy_tcp@dns-test-service.dns-9569.svc wheezy_udp@_http._tcp.dns-test-service.dns-9569.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9569.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9569 jessie_tcp@dns-test-service.dns-9569 jessie_udp@dns-test-service.dns-9569.svc jessie_tcp@dns-test-service.dns-9569.svc jessie_udp@_http._tcp.dns-test-service.dns-9569.svc jessie_tcp@_http._tcp.dns-test-service.dns-9569.svc] Sep 4 13:42:55.482: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:55.486: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:55.489: INFO: Unable to read wheezy_udp@dns-test-service.dns-9569 from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:55.512: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9569 from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:55.515: INFO: Unable to read wheezy_udp@dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:55.518: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:55.521: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:55.524: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9569.svc from pod 
dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:55.548: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:55.550: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:55.553: INFO: Unable to read jessie_udp@dns-test-service.dns-9569 from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:55.555: INFO: Unable to read jessie_tcp@dns-test-service.dns-9569 from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:55.558: INFO: Unable to read jessie_udp@dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:55.560: INFO: Unable to read jessie_tcp@dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:55.563: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:55.566: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:42:55.582: INFO: Lookups using dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9569 wheezy_tcp@dns-test-service.dns-9569 wheezy_udp@dns-test-service.dns-9569.svc wheezy_tcp@dns-test-service.dns-9569.svc wheezy_udp@_http._tcp.dns-test-service.dns-9569.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9569.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9569 jessie_tcp@dns-test-service.dns-9569 jessie_udp@dns-test-service.dns-9569.svc jessie_tcp@dns-test-service.dns-9569.svc jessie_udp@_http._tcp.dns-test-service.dns-9569.svc jessie_tcp@_http._tcp.dns-test-service.dns-9569.svc] Sep 4 13:43:00.480: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:43:00.484: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:43:00.486: INFO: Unable to read wheezy_udp@dns-test-service.dns-9569 from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could 
not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:43:00.490: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9569 from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:43:00.492: INFO: Unable to read wheezy_udp@dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:43:00.495: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:43:00.499: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:43:00.502: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:43:00.539: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:43:00.542: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:43:00.545: INFO: Unable to read jessie_udp@dns-test-service.dns-9569 from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:43:00.548: INFO: Unable to read jessie_tcp@dns-test-service.dns-9569 from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:43:00.551: INFO: Unable to read jessie_udp@dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:43:00.554: INFO: Unable to read jessie_tcp@dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:43:00.557: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:43:00.560: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9569.svc from pod dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255: the server could not find the requested resource (get pods dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255) Sep 4 13:43:00.578: INFO: Lookups using dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service 
wheezy_udp@dns-test-service.dns-9569 wheezy_tcp@dns-test-service.dns-9569 wheezy_udp@dns-test-service.dns-9569.svc wheezy_tcp@dns-test-service.dns-9569.svc wheezy_udp@_http._tcp.dns-test-service.dns-9569.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9569.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9569 jessie_tcp@dns-test-service.dns-9569 jessie_udp@dns-test-service.dns-9569.svc jessie_tcp@dns-test-service.dns-9569.svc jessie_udp@_http._tcp.dns-test-service.dns-9569.svc jessie_tcp@_http._tcp.dns-test-service.dns-9569.svc] Sep 4 13:43:05.606: INFO: DNS probes using dns-9569/dns-test-a557a299-2b97-4a33-b2d1-800db9bf6255 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:43:06.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9569" for this suite. • [SLOW TEST:39.532 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":303,"completed":109,"skipped":1859,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:43:06.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6471 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6471 STEP: creating replication controller externalsvc in namespace services-6471 I0904 13:43:07.369388 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-6471, replica count: 2 I0904 13:43:10.419828 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0904 13:43:13.420042 7 
runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Sep 4 13:43:13.463: INFO: Creating new exec pod Sep 4 13:43:17.536: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6471 execpod8zkxl -- /bin/sh -x -c nslookup clusterip-service.services-6471.svc.cluster.local' Sep 4 13:43:21.042: INFO: stderr: "I0904 13:43:20.915287 1712 log.go:181] (0xc000bf40b0) (0xc000b9a140) Create stream\nI0904 13:43:20.915339 1712 log.go:181] (0xc000bf40b0) (0xc000b9a140) Stream added, broadcasting: 1\nI0904 13:43:20.917293 1712 log.go:181] (0xc000bf40b0) Reply frame received for 1\nI0904 13:43:20.917340 1712 log.go:181] (0xc000bf40b0) (0xc000b9a1e0) Create stream\nI0904 13:43:20.917353 1712 log.go:181] (0xc000bf40b0) (0xc000b9a1e0) Stream added, broadcasting: 3\nI0904 13:43:20.918271 1712 log.go:181] (0xc000bf40b0) Reply frame received for 3\nI0904 13:43:20.918297 1712 log.go:181] (0xc000bf40b0) (0xc000998280) Create stream\nI0904 13:43:20.918304 1712 log.go:181] (0xc000bf40b0) (0xc000998280) Stream added, broadcasting: 5\nI0904 13:43:20.919053 1712 log.go:181] (0xc000bf40b0) Reply frame received for 5\nI0904 13:43:21.002551 1712 log.go:181] (0xc000bf40b0) Data frame received for 5\nI0904 13:43:21.002578 1712 log.go:181] (0xc000998280) (5) Data frame handling\nI0904 13:43:21.002593 1712 log.go:181] (0xc000998280) (5) Data frame sent\n+ nslookup clusterip-service.services-6471.svc.cluster.local\nI0904 13:43:21.027322 1712 log.go:181] (0xc000bf40b0) Data frame received for 3\nI0904 13:43:21.027382 1712 log.go:181] (0xc000b9a1e0) (3) Data frame handling\nI0904 13:43:21.027409 1712 log.go:181] (0xc000b9a1e0) (3) Data frame sent\nI0904 13:43:21.028087 1712 log.go:181] (0xc000bf40b0) Data frame received for 3\nI0904 13:43:21.028109 1712 log.go:181] (0xc000b9a1e0) (3) Data frame handling\nI0904 13:43:21.028132 1712 log.go:181] (0xc000b9a1e0) (3) Data frame sent\nI0904 13:43:21.028502 1712 log.go:181] (0xc000bf40b0) Data frame received for 3\nI0904 13:43:21.028534 1712 log.go:181] (0xc000b9a1e0) (3) Data frame handling\nI0904 13:43:21.028563 1712 log.go:181] (0xc000bf40b0) Data frame received for 5\nI0904 13:43:21.028587 1712 log.go:181] (0xc000998280) (5) Data frame handling\nI0904 13:43:21.030828 1712 log.go:181] (0xc000bf40b0) Data frame received for 1\nI0904 13:43:21.030867 1712 log.go:181] (0xc000b9a140) (1) Data frame handling\nI0904 13:43:21.030903 1712 log.go:181] (0xc000b9a140) (1) Data frame sent\nI0904 13:43:21.030953 1712 log.go:181] (0xc000bf40b0) (0xc000b9a140) Stream removed, broadcasting: 1\nI0904 13:43:21.031004 1712 log.go:181] (0xc000bf40b0) Go away received\nI0904 13:43:21.031594 1712 log.go:181] (0xc000bf40b0) (0xc000b9a140) Stream removed, broadcasting: 1\nI0904 13:43:21.031616 1712 log.go:181] (0xc000bf40b0) (0xc000b9a1e0) Stream removed, broadcasting: 3\nI0904 13:43:21.031627 1712 log.go:181] (0xc000bf40b0) (0xc000998280) Stream removed, broadcasting: 5\n" Sep 4 13:43:21.042: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-6471.svc.cluster.local\tcanonical name = externalsvc.services-6471.svc.cluster.local.\nName:\texternalsvc.services-6471.svc.cluster.local\nAddress: 10.99.139.46\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6471, will wait for the garbage collector to delete the pods Sep 4 
13:43:21.107: INFO: Deleting ReplicationController externalsvc took: 11.914183ms Sep 4 13:43:21.607: INFO: Terminating ReplicationController externalsvc pods took: 500.179446ms Sep 4 13:43:29.731: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:43:29.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6471" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:23.295 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":303,"completed":110,"skipped":1880,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:43:29.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 4 13:43:30.507: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 4 13:43:32.518: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734823810, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734823810, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734823810, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734823810, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 4 13:43:34.522: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734823810, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734823810, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734823810, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734823810, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 4 13:43:37.602: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:43:38.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2536" for this suite. STEP: Destroying namespace "webhook-2536-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.550 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":303,"completed":111,"skipped":1887,"failed":0} SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:43:38.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-5712 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 4 13:43:38.475: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Sep 4 13:43:38.859: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 4 13:43:40.950: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 4 13:43:42.864: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 4 13:43:44.864: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 4 13:43:46.863: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 4 13:43:48.864: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 4 13:43:50.865: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 4 13:43:52.865: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 4 13:43:54.864: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 4 13:43:56.863: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 4 13:43:58.863: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 4 13:44:00.864: INFO: The status of Pod netserver-0 is Running (Ready = true) Sep 4 13:44:00.870: INFO: The status of Pod netserver-1 is Running (Ready = false) Sep 4 13:44:02.876: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Sep 4 13:44:08.908: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.222:8080/dial?request=hostname&protocol=http&host=10.244.2.151&port=8080&tries=1'] 
Namespace:pod-network-test-5712 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 4 13:44:08.908: INFO: >>> kubeConfig: /root/.kube/config I0904 13:44:08.930177 7 log.go:181] (0xc002df4000) (0xc007b78c80) Create stream I0904 13:44:08.930205 7 log.go:181] (0xc002df4000) (0xc007b78c80) Stream added, broadcasting: 1 I0904 13:44:08.931701 7 log.go:181] (0xc002df4000) Reply frame received for 1 I0904 13:44:08.931735 7 log.go:181] (0xc002df4000) (0xc00600a320) Create stream I0904 13:44:08.931747 7 log.go:181] (0xc002df4000) (0xc00600a320) Stream added, broadcasting: 3 I0904 13:44:08.932631 7 log.go:181] (0xc002df4000) Reply frame received for 3 I0904 13:44:08.932670 7 log.go:181] (0xc002df4000) (0xc00600a3c0) Create stream I0904 13:44:08.932689 7 log.go:181] (0xc002df4000) (0xc00600a3c0) Stream added, broadcasting: 5 I0904 13:44:08.933809 7 log.go:181] (0xc002df4000) Reply frame received for 5 I0904 13:44:09.023373 7 log.go:181] (0xc002df4000) Data frame received for 3 I0904 13:44:09.023400 7 log.go:181] (0xc00600a320) (3) Data frame handling I0904 13:44:09.023416 7 log.go:181] (0xc00600a320) (3) Data frame sent I0904 13:44:09.024027 7 log.go:181] (0xc002df4000) Data frame received for 5 I0904 13:44:09.024042 7 log.go:181] (0xc00600a3c0) (5) Data frame handling I0904 13:44:09.024069 7 log.go:181] (0xc002df4000) Data frame received for 3 I0904 13:44:09.024089 7 log.go:181] (0xc00600a320) (3) Data frame handling I0904 13:44:09.025680 7 log.go:181] (0xc002df4000) Data frame received for 1 I0904 13:44:09.025692 7 log.go:181] (0xc007b78c80) (1) Data frame handling I0904 13:44:09.025707 7 log.go:181] (0xc007b78c80) (1) Data frame sent I0904 13:44:09.025717 7 log.go:181] (0xc002df4000) (0xc007b78c80) Stream removed, broadcasting: 1 I0904 13:44:09.025728 7 log.go:181] (0xc002df4000) Go away received I0904 13:44:09.025813 7 log.go:181] (0xc002df4000) (0xc007b78c80) Stream removed, broadcasting: 1 I0904 13:44:09.025829 7 log.go:181] (0xc002df4000) (0xc00600a320) Stream removed, broadcasting: 3 I0904 13:44:09.025840 7 log.go:181] (0xc002df4000) (0xc00600a3c0) Stream removed, broadcasting: 5 Sep 4 13:44:09.025: INFO: Waiting for responses: map[] Sep 4 13:44:09.028: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.222:8080/dial?request=hostname&protocol=http&host=10.244.1.221&port=8080&tries=1'] Namespace:pod-network-test-5712 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 4 13:44:09.028: INFO: >>> kubeConfig: /root/.kube/config I0904 13:44:09.056697 7 log.go:181] (0xc004f86420) (0xc00600a5a0) Create stream I0904 13:44:09.056841 7 log.go:181] (0xc004f86420) (0xc00600a5a0) Stream added, broadcasting: 1 I0904 13:44:09.058506 7 log.go:181] (0xc004f86420) Reply frame received for 1 I0904 13:44:09.058543 7 log.go:181] (0xc004f86420) (0xc005328320) Create stream I0904 13:44:09.058555 7 log.go:181] (0xc004f86420) (0xc005328320) Stream added, broadcasting: 3 I0904 13:44:09.059555 7 log.go:181] (0xc004f86420) Reply frame received for 3 I0904 13:44:09.059617 7 log.go:181] (0xc004f86420) (0xc0053283c0) Create stream I0904 13:44:09.059640 7 log.go:181] (0xc004f86420) (0xc0053283c0) Stream added, broadcasting: 5 I0904 13:44:09.060462 7 log.go:181] (0xc004f86420) Reply frame received for 5 I0904 13:44:09.126426 7 log.go:181] (0xc004f86420) Data frame received for 3 I0904 13:44:09.126472 7 log.go:181] (0xc005328320) (3) Data frame handling I0904 
13:44:09.126498 7 log.go:181] (0xc005328320) (3) Data frame sent I0904 13:44:09.128294 7 log.go:181] (0xc004f86420) Data frame received for 5 I0904 13:44:09.128318 7 log.go:181] (0xc0053283c0) (5) Data frame handling I0904 13:44:09.130084 7 log.go:181] (0xc004f86420) Data frame received for 1 I0904 13:44:09.130117 7 log.go:181] (0xc00600a5a0) (1) Data frame handling I0904 13:44:09.130137 7 log.go:181] (0xc00600a5a0) (1) Data frame sent I0904 13:44:09.130148 7 log.go:181] (0xc004f86420) (0xc00600a5a0) Stream removed, broadcasting: 1 I0904 13:44:09.130169 7 log.go:181] (0xc004f86420) Data frame received for 3 I0904 13:44:09.130181 7 log.go:181] (0xc005328320) (3) Data frame handling I0904 13:44:09.130200 7 log.go:181] (0xc004f86420) Go away received I0904 13:44:09.130251 7 log.go:181] (0xc004f86420) (0xc00600a5a0) Stream removed, broadcasting: 1 I0904 13:44:09.130279 7 log.go:181] (0xc004f86420) (0xc005328320) Stream removed, broadcasting: 3 I0904 13:44:09.130300 7 log.go:181] (0xc004f86420) (0xc0053283c0) Stream removed, broadcasting: 5 Sep 4 13:44:09.130: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:44:09.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5712" for this suite. • [SLOW TEST:30.875 seconds] [sig-network] Networking /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":303,"completed":112,"skipped":1895,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-instrumentation] Events API should delete a collection of events [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:44:09.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Sep 4 13:44:09.335: INFO: requesting 
DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:44:09.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6395" for this suite. •{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":303,"completed":113,"skipped":1915,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:44:09.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-7883f737-a201-4373-802b-054cac9800b0 STEP: Creating a pod to test consume secrets Sep 4 13:44:09.537: INFO: Waiting up to 5m0s for pod "pod-secrets-e760839e-f6b4-44e7-b2b5-085c925ef929" in namespace "secrets-9407" to be "Succeeded or Failed" Sep 4 13:44:09.542: INFO: Pod "pod-secrets-e760839e-f6b4-44e7-b2b5-085c925ef929": Phase="Pending", Reason="", readiness=false. Elapsed: 4.821178ms Sep 4 13:44:11.591: INFO: Pod "pod-secrets-e760839e-f6b4-44e7-b2b5-085c925ef929": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053718305s Sep 4 13:44:13.595: INFO: Pod "pod-secrets-e760839e-f6b4-44e7-b2b5-085c925ef929": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057843736s Sep 4 13:44:15.806: INFO: Pod "pod-secrets-e760839e-f6b4-44e7-b2b5-085c925ef929": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.268685825s STEP: Saw pod success Sep 4 13:44:15.806: INFO: Pod "pod-secrets-e760839e-f6b4-44e7-b2b5-085c925ef929" satisfied condition "Succeeded or Failed" Sep 4 13:44:15.807: INFO: Trying to get logs from node latest-worker pod pod-secrets-e760839e-f6b4-44e7-b2b5-085c925ef929 container secret-env-test: STEP: delete the pod Sep 4 13:44:16.429: INFO: Waiting for pod pod-secrets-e760839e-f6b4-44e7-b2b5-085c925ef929 to disappear Sep 4 13:44:16.448: INFO: Pod pod-secrets-e760839e-f6b4-44e7-b2b5-085c925ef929 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:44:16.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9407" for this suite. 
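Note: the Secrets test above wires one secret key into a container's environment and asserts on the pod's output. A sketch of the pod shape it creates, using the corev1 types; the pod, secret, and key names here are illustrative stand-ins, not the generated names from the log:

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-env-test",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					// SECRET_DATA is populated from one key of the secret at pod start.
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test-example"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	// Print the manifest; the e2e framework instead POSTs it and waits for
	// the pod to reach "Succeeded or Failed", as the log above shows.
	json.NewEncoder(os.Stdout).Encode(pod)
}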
• [SLOW TEST:7.036 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:36 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":303,"completed":114,"skipped":1931,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:44:16.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-a6f7b331-dd3e-40f0-8c5c-1810c4262860 STEP: Creating a pod to test consume configMaps Sep 4 13:44:17.183: INFO: Waiting up to 5m0s for pod "pod-configmaps-c7bb8459-8016-4057-8fef-180c6cb2e364" in namespace "configmap-6044" to be "Succeeded or Failed" Sep 4 13:44:17.186: INFO: Pod "pod-configmaps-c7bb8459-8016-4057-8fef-180c6cb2e364": Phase="Pending", Reason="", readiness=false. Elapsed: 2.648711ms Sep 4 13:44:19.189: INFO: Pod "pod-configmaps-c7bb8459-8016-4057-8fef-180c6cb2e364": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006376127s Sep 4 13:44:21.194: INFO: Pod "pod-configmaps-c7bb8459-8016-4057-8fef-180c6cb2e364": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010770215s STEP: Saw pod success Sep 4 13:44:21.194: INFO: Pod "pod-configmaps-c7bb8459-8016-4057-8fef-180c6cb2e364" satisfied condition "Succeeded or Failed" Sep 4 13:44:21.197: INFO: Trying to get logs from node latest-worker pod pod-configmaps-c7bb8459-8016-4057-8fef-180c6cb2e364 container configmap-volume-test: STEP: delete the pod Sep 4 13:44:21.328: INFO: Waiting for pod pod-configmaps-c7bb8459-8016-4057-8fef-180c6cb2e364 to disappear Sep 4 13:44:21.341: INFO: Pod pod-configmaps-c7bb8459-8016-4057-8fef-180c6cb2e364 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:44:21.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6044" for this suite. 
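Note: the ConfigMap test above mounts a configMap volume with defaultMode set and then verifies the mode of the projected file. The mode is an octal value carried as *int32 in the API; a sketch of the relevant volume source, with illustrative volume and configMap names:

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// 0400 instead of the 0644 default; kubelet applies this mode to every
	// file projected from the configMap into the volume.
	mode := int32(0400)
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-example"},
				DefaultMode:          &mode,
			},
		},
	}
	json.NewEncoder(os.Stdout).Encode(vol)
}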
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":115,"skipped":1939,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:44:21.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 13:44:21.498: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-6770 I0904 13:44:21.520434 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-6770, replica count: 1 I0904 13:44:22.570897 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0904 13:44:23.571148 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0904 13:44:24.571299 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0904 13:44:25.571531 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 4 13:44:25.701: INFO: Created: latency-svc-g4gpc Sep 4 13:44:25.724: INFO: Got endpoints: latency-svc-g4gpc [52.599809ms] Sep 4 13:44:25.824: INFO: Created: latency-svc-jp748 Sep 4 13:44:25.864: INFO: Created: latency-svc-rjbjb Sep 4 13:44:25.864: INFO: Got endpoints: latency-svc-jp748 [139.808103ms] Sep 4 13:44:25.881: INFO: Got endpoints: latency-svc-rjbjb [157.075944ms] Sep 4 13:44:25.917: INFO: Created: latency-svc-nszhg Sep 4 13:44:25.980: INFO: Got endpoints: latency-svc-nszhg [256.022377ms] Sep 4 13:44:25.985: INFO: Created: latency-svc-mg8vc Sep 4 13:44:25.996: INFO: Got endpoints: latency-svc-mg8vc [271.927735ms] Sep 4 13:44:26.028: INFO: Created: latency-svc-2t9jc Sep 4 13:44:26.147: INFO: Got endpoints: latency-svc-2t9jc [423.392931ms] Sep 4 13:44:26.149: INFO: Created: latency-svc-qhj58 Sep 4 13:44:26.157: INFO: Got endpoints: latency-svc-qhj58 [432.910795ms] Sep 4 13:44:26.184: INFO: Created: latency-svc-rnvfq Sep 4 13:44:26.200: INFO: Got endpoints: latency-svc-rnvfq [475.89316ms] Sep 4 13:44:26.226: INFO: Created: latency-svc-j8k6v Sep 4 13:44:26.236: INFO: Got endpoints: latency-svc-j8k6v [512.234115ms] Sep 4 13:44:26.298: INFO: Created: latency-svc-8gj46 Sep 4 13:44:26.301: INFO: Got endpoints: latency-svc-8gj46 [576.377909ms] Sep 4 13:44:26.343: INFO: Created: latency-svc-wsb9l Sep 4 13:44:26.357: INFO: Got endpoints: latency-svc-wsb9l [632.976857ms] Sep 4 13:44:26.381: INFO: Created: 
latency-svc-vvt7j Sep 4 13:44:26.477: INFO: Got endpoints: latency-svc-vvt7j [753.289899ms] Sep 4 13:44:26.481: INFO: Created: latency-svc-gt65d Sep 4 13:44:26.518: INFO: Got endpoints: latency-svc-gt65d [793.32074ms] Sep 4 13:44:26.519: INFO: Created: latency-svc-bnclk Sep 4 13:44:26.535: INFO: Got endpoints: latency-svc-bnclk [811.212307ms] Sep 4 13:44:26.640: INFO: Created: latency-svc-2zkvq Sep 4 13:44:26.662: INFO: Got endpoints: latency-svc-2zkvq [937.388894ms] Sep 4 13:44:26.764: INFO: Created: latency-svc-5nm46 Sep 4 13:44:26.776: INFO: Got endpoints: latency-svc-5nm46 [1.051241339s] Sep 4 13:44:26.872: INFO: Created: latency-svc-dkt26 Sep 4 13:44:26.877: INFO: Got endpoints: latency-svc-dkt26 [1.013523607s] Sep 4 13:44:26.914: INFO: Created: latency-svc-gn5kv Sep 4 13:44:26.943: INFO: Got endpoints: latency-svc-gn5kv [1.061288475s] Sep 4 13:44:27.022: INFO: Created: latency-svc-6sfwm Sep 4 13:44:27.029: INFO: Got endpoints: latency-svc-6sfwm [1.04864108s] Sep 4 13:44:27.066: INFO: Created: latency-svc-csxqw Sep 4 13:44:27.083: INFO: Got endpoints: latency-svc-csxqw [1.087098487s] Sep 4 13:44:27.118: INFO: Created: latency-svc-lxsxr Sep 4 13:44:27.196: INFO: Got endpoints: latency-svc-lxsxr [1.048258596s] Sep 4 13:44:27.203: INFO: Created: latency-svc-q49s7 Sep 4 13:44:27.210: INFO: Got endpoints: latency-svc-q49s7 [1.052645676s] Sep 4 13:44:27.233: INFO: Created: latency-svc-qwzxs Sep 4 13:44:27.252: INFO: Got endpoints: latency-svc-qwzxs [1.051552569s] Sep 4 13:44:27.275: INFO: Created: latency-svc-qrm57 Sep 4 13:44:27.339: INFO: Got endpoints: latency-svc-qrm57 [1.10266681s] Sep 4 13:44:27.366: INFO: Created: latency-svc-hnx7n Sep 4 13:44:27.384: INFO: Got endpoints: latency-svc-hnx7n [1.082984851s] Sep 4 13:44:27.418: INFO: Created: latency-svc-kfrnj Sep 4 13:44:27.433: INFO: Got endpoints: latency-svc-kfrnj [1.075735953s] Sep 4 13:44:27.496: INFO: Created: latency-svc-sxgts Sep 4 13:44:27.517: INFO: Got endpoints: latency-svc-sxgts [1.039169627s] Sep 4 13:44:27.632: INFO: Created: latency-svc-dxbpp Sep 4 13:44:27.643: INFO: Got endpoints: latency-svc-dxbpp [1.125027056s] Sep 4 13:44:27.699: INFO: Created: latency-svc-c4qgs Sep 4 13:44:27.724: INFO: Got endpoints: latency-svc-c4qgs [1.188478363s] Sep 4 13:44:27.841: INFO: Created: latency-svc-bl7c4 Sep 4 13:44:27.861: INFO: Got endpoints: latency-svc-bl7c4 [1.199083623s] Sep 4 13:44:27.861: INFO: Created: latency-svc-t7fsk Sep 4 13:44:27.877: INFO: Got endpoints: latency-svc-t7fsk [1.101374483s] Sep 4 13:44:27.898: INFO: Created: latency-svc-b268n Sep 4 13:44:28.034: INFO: Got endpoints: latency-svc-b268n [1.156544208s] Sep 4 13:44:28.038: INFO: Created: latency-svc-f5h9j Sep 4 13:44:28.049: INFO: Got endpoints: latency-svc-f5h9j [1.106273616s] Sep 4 13:44:28.084: INFO: Created: latency-svc-ps5ln Sep 4 13:44:28.102: INFO: Got endpoints: latency-svc-ps5ln [1.072972995s] Sep 4 13:44:28.220: INFO: Created: latency-svc-xtrth Sep 4 13:44:28.251: INFO: Got endpoints: latency-svc-xtrth [1.168358886s] Sep 4 13:44:28.253: INFO: Created: latency-svc-f6r6g Sep 4 13:44:28.270: INFO: Got endpoints: latency-svc-f6r6g [1.073949462s] Sep 4 13:44:28.363: INFO: Created: latency-svc-rr4zq Sep 4 13:44:28.372: INFO: Got endpoints: latency-svc-rr4zq [1.161958724s] Sep 4 13:44:28.422: INFO: Created: latency-svc-nwdpb Sep 4 13:44:28.432: INFO: Got endpoints: latency-svc-nwdpb [1.180217381s] Sep 4 13:44:28.526: INFO: Created: latency-svc-6p2jf Sep 4 13:44:28.531: INFO: Got endpoints: latency-svc-6p2jf [1.191938197s] Sep 4 13:44:28.564: INFO: Created: 
latency-svc-4t9cm Sep 4 13:44:28.583: INFO: Got endpoints: latency-svc-4t9cm [1.198956572s] Sep 4 13:44:28.693: INFO: Created: latency-svc-jrg2f Sep 4 13:44:28.761: INFO: Got endpoints: latency-svc-jrg2f [1.328130311s] Sep 4 13:44:28.847: INFO: Created: latency-svc-h4pr7 Sep 4 13:44:28.877: INFO: Got endpoints: latency-svc-h4pr7 [1.360373071s] Sep 4 13:44:28.878: INFO: Created: latency-svc-rgf42 Sep 4 13:44:28.907: INFO: Got endpoints: latency-svc-rgf42 [1.263830916s] Sep 4 13:44:29.009: INFO: Created: latency-svc-f6dvj Sep 4 13:44:29.015: INFO: Got endpoints: latency-svc-f6dvj [1.291201596s] Sep 4 13:44:29.051: INFO: Created: latency-svc-p7b59 Sep 4 13:44:29.064: INFO: Got endpoints: latency-svc-p7b59 [1.202878437s] Sep 4 13:44:29.087: INFO: Created: latency-svc-zj9kd Sep 4 13:44:29.101: INFO: Got endpoints: latency-svc-zj9kd [1.223274735s] Sep 4 13:44:29.154: INFO: Created: latency-svc-f5n76 Sep 4 13:44:29.187: INFO: Got endpoints: latency-svc-f5n76 [1.152938117s] Sep 4 13:44:29.188: INFO: Created: latency-svc-4zpff Sep 4 13:44:29.217: INFO: Got endpoints: latency-svc-4zpff [1.16804882s] Sep 4 13:44:29.327: INFO: Created: latency-svc-kmwpt Sep 4 13:44:29.361: INFO: Got endpoints: latency-svc-kmwpt [1.259765859s] Sep 4 13:44:29.363: INFO: Created: latency-svc-f2brp Sep 4 13:44:29.391: INFO: Got endpoints: latency-svc-f2brp [1.139810005s] Sep 4 13:44:29.484: INFO: Created: latency-svc-944q9 Sep 4 13:44:29.492: INFO: Got endpoints: latency-svc-944q9 [1.221730497s] Sep 4 13:44:29.513: INFO: Created: latency-svc-vldz9 Sep 4 13:44:29.538: INFO: Got endpoints: latency-svc-vldz9 [1.165723553s] Sep 4 13:44:29.567: INFO: Created: latency-svc-gjxcf Sep 4 13:44:29.581: INFO: Got endpoints: latency-svc-gjxcf [1.148705937s] Sep 4 13:44:29.666: INFO: Created: latency-svc-5p7q6 Sep 4 13:44:29.718: INFO: Got endpoints: latency-svc-5p7q6 [1.186515425s] Sep 4 13:44:29.759: INFO: Created: latency-svc-9ff7w Sep 4 13:44:29.836: INFO: Got endpoints: latency-svc-9ff7w [1.253092215s] Sep 4 13:44:29.879: INFO: Created: latency-svc-8tbh2 Sep 4 13:44:29.897: INFO: Got endpoints: latency-svc-8tbh2 [1.136002345s] Sep 4 13:44:30.021: INFO: Created: latency-svc-86c7z Sep 4 13:44:30.036: INFO: Got endpoints: latency-svc-86c7z [1.158267983s] Sep 4 13:44:30.063: INFO: Created: latency-svc-nn4kp Sep 4 13:44:30.165: INFO: Got endpoints: latency-svc-nn4kp [1.2583977s] Sep 4 13:44:30.169: INFO: Created: latency-svc-4xf8w Sep 4 13:44:30.180: INFO: Got endpoints: latency-svc-4xf8w [1.164879681s] Sep 4 13:44:30.219: INFO: Created: latency-svc-fg6ks Sep 4 13:44:30.235: INFO: Got endpoints: latency-svc-fg6ks [1.170537121s] Sep 4 13:44:30.352: INFO: Created: latency-svc-wsdfh Sep 4 13:44:30.366: INFO: Got endpoints: latency-svc-wsdfh [1.265854604s] Sep 4 13:44:30.394: INFO: Created: latency-svc-79fdw Sep 4 13:44:30.414: INFO: Got endpoints: latency-svc-79fdw [1.227378325s] Sep 4 13:44:30.435: INFO: Created: latency-svc-vvtz6 Sep 4 13:44:30.445: INFO: Got endpoints: latency-svc-vvtz6 [1.228132136s] Sep 4 13:44:30.510: INFO: Created: latency-svc-hvlzz Sep 4 13:44:30.533: INFO: Got endpoints: latency-svc-hvlzz [1.171596155s] Sep 4 13:44:30.566: INFO: Created: latency-svc-g7jw8 Sep 4 13:44:30.578: INFO: Got endpoints: latency-svc-g7jw8 [1.186347417s] Sep 4 13:44:30.687: INFO: Created: latency-svc-ws5m6 Sep 4 13:44:30.691: INFO: Got endpoints: latency-svc-ws5m6 [1.199896497s] Sep 4 13:44:30.762: INFO: Created: latency-svc-b968f Sep 4 13:44:30.777: INFO: Got endpoints: latency-svc-b968f [1.238744455s] Sep 4 13:44:30.833: INFO: Created: 
latency-svc-8dbv5 Sep 4 13:44:30.842: INFO: Got endpoints: latency-svc-8dbv5 [1.261193144s] Sep 4 13:44:30.903: INFO: Created: latency-svc-x9285 Sep 4 13:44:30.980: INFO: Got endpoints: latency-svc-x9285 [1.262617146s] Sep 4 13:44:31.019: INFO: Created: latency-svc-45mhb Sep 4 13:44:31.055: INFO: Got endpoints: latency-svc-45mhb [1.219280325s] Sep 4 13:44:31.130: INFO: Created: latency-svc-f78gf Sep 4 13:44:31.135: INFO: Got endpoints: latency-svc-f78gf [1.237750157s] Sep 4 13:44:31.198: INFO: Created: latency-svc-hcskn Sep 4 13:44:31.211: INFO: Got endpoints: latency-svc-hcskn [1.175181496s] Sep 4 13:44:31.229: INFO: Created: latency-svc-cztdz Sep 4 13:44:31.328: INFO: Got endpoints: latency-svc-cztdz [1.1623739s] Sep 4 13:44:31.331: INFO: Created: latency-svc-sslzj Sep 4 13:44:31.353: INFO: Got endpoints: latency-svc-sslzj [1.1730153s] Sep 4 13:44:31.389: INFO: Created: latency-svc-8v2gn Sep 4 13:44:31.495: INFO: Got endpoints: latency-svc-8v2gn [1.260710043s] Sep 4 13:44:31.498: INFO: Created: latency-svc-4z89c Sep 4 13:44:31.512: INFO: Got endpoints: latency-svc-4z89c [1.145352073s] Sep 4 13:44:31.535: INFO: Created: latency-svc-blbwd Sep 4 13:44:31.554: INFO: Got endpoints: latency-svc-blbwd [1.139538924s] Sep 4 13:44:31.651: INFO: Created: latency-svc-2s4fp Sep 4 13:44:31.656: INFO: Got endpoints: latency-svc-2s4fp [1.210357947s] Sep 4 13:44:31.689: INFO: Created: latency-svc-tmcmc Sep 4 13:44:31.704: INFO: Got endpoints: latency-svc-tmcmc [1.170339691s] Sep 4 13:44:31.743: INFO: Created: latency-svc-2xw25 Sep 4 13:44:31.800: INFO: Got endpoints: latency-svc-2xw25 [1.222265301s] Sep 4 13:44:31.823: INFO: Created: latency-svc-cw8vs Sep 4 13:44:31.865: INFO: Got endpoints: latency-svc-cw8vs [1.173832869s] Sep 4 13:44:31.889: INFO: Created: latency-svc-92tl8 Sep 4 13:44:31.980: INFO: Got endpoints: latency-svc-92tl8 [1.203328774s] Sep 4 13:44:32.021: INFO: Created: latency-svc-7kxjc Sep 4 13:44:32.045: INFO: Got endpoints: latency-svc-7kxjc [1.202622038s] Sep 4 13:44:32.131: INFO: Created: latency-svc-pr5jc Sep 4 13:44:32.139: INFO: Got endpoints: latency-svc-pr5jc [1.158045357s] Sep 4 13:44:32.210: INFO: Created: latency-svc-m4n9x Sep 4 13:44:32.340: INFO: Got endpoints: latency-svc-m4n9x [1.284669616s] Sep 4 13:44:32.364: INFO: Created: latency-svc-ftctq Sep 4 13:44:32.377: INFO: Got endpoints: latency-svc-ftctq [1.24189547s] Sep 4 13:44:32.409: INFO: Created: latency-svc-clzs4 Sep 4 13:44:32.426: INFO: Got endpoints: latency-svc-clzs4 [1.214747431s] Sep 4 13:44:32.484: INFO: Created: latency-svc-jxzfm Sep 4 13:44:32.507: INFO: Got endpoints: latency-svc-jxzfm [1.179246839s] Sep 4 13:44:32.544: INFO: Created: latency-svc-z4h2l Sep 4 13:44:32.552: INFO: Got endpoints: latency-svc-z4h2l [1.198478329s] Sep 4 13:44:32.577: INFO: Created: latency-svc-rmlwt Sep 4 13:44:32.669: INFO: Got endpoints: latency-svc-rmlwt [1.173596201s] Sep 4 13:44:32.674: INFO: Created: latency-svc-cr4lj Sep 4 13:44:32.691: INFO: Got endpoints: latency-svc-cr4lj [1.178878834s] Sep 4 13:44:32.747: INFO: Created: latency-svc-hw88n Sep 4 13:44:32.818: INFO: Got endpoints: latency-svc-hw88n [1.263854414s] Sep 4 13:44:32.847: INFO: Created: latency-svc-mvjhn Sep 4 13:44:32.863: INFO: Got endpoints: latency-svc-mvjhn [1.207784694s] Sep 4 13:44:32.888: INFO: Created: latency-svc-klg9s Sep 4 13:44:32.899: INFO: Got endpoints: latency-svc-klg9s [1.195376604s] Sep 4 13:44:32.968: INFO: Created: latency-svc-wklpd Sep 4 13:44:32.978: INFO: Got endpoints: latency-svc-wklpd [1.178118521s] Sep 4 13:44:32.998: INFO: Created: 
latency-svc-4grnf Sep 4 13:44:33.014: INFO: Got endpoints: latency-svc-4grnf [1.148299906s] Sep 4 13:44:33.039: INFO: Created: latency-svc-lvwmj Sep 4 13:44:33.130: INFO: Got endpoints: latency-svc-lvwmj [1.149926092s] Sep 4 13:44:33.143: INFO: Created: latency-svc-rgfmr Sep 4 13:44:33.158: INFO: Got endpoints: latency-svc-rgfmr [1.113725519s] Sep 4 13:44:33.179: INFO: Created: latency-svc-9hnx4 Sep 4 13:44:33.194: INFO: Got endpoints: latency-svc-9hnx4 [1.055714275s] Sep 4 13:44:33.215: INFO: Created: latency-svc-mxfsk Sep 4 13:44:33.274: INFO: Got endpoints: latency-svc-mxfsk [933.41963ms] Sep 4 13:44:33.291: INFO: Created: latency-svc-ldjwr Sep 4 13:44:33.321: INFO: Got endpoints: latency-svc-ldjwr [944.377567ms] Sep 4 13:44:33.346: INFO: Created: latency-svc-5wq5g Sep 4 13:44:33.363: INFO: Got endpoints: latency-svc-5wq5g [937.552474ms] Sep 4 13:44:33.429: INFO: Created: latency-svc-6tj2l Sep 4 13:44:33.483: INFO: Got endpoints: latency-svc-6tj2l [975.861239ms] Sep 4 13:44:33.485: INFO: Created: latency-svc-lflb6 Sep 4 13:44:33.513: INFO: Got endpoints: latency-svc-lflb6 [961.018456ms] Sep 4 13:44:33.591: INFO: Created: latency-svc-8qnhv Sep 4 13:44:33.605: INFO: Got endpoints: latency-svc-8qnhv [935.695602ms] Sep 4 13:44:33.642: INFO: Created: latency-svc-zfl5q Sep 4 13:44:33.659: INFO: Got endpoints: latency-svc-zfl5q [967.716763ms] Sep 4 13:44:33.771: INFO: Created: latency-svc-nc7fg Sep 4 13:44:33.771: INFO: Got endpoints: latency-svc-nc7fg [952.745463ms] Sep 4 13:44:33.962: INFO: Created: latency-svc-tljn6 Sep 4 13:44:34.018: INFO: Got endpoints: latency-svc-tljn6 [1.154972633s] Sep 4 13:44:34.020: INFO: Created: latency-svc-9j6p4 Sep 4 13:44:34.055: INFO: Got endpoints: latency-svc-9j6p4 [1.1561847s] Sep 4 13:44:34.150: INFO: Created: latency-svc-vgqgd Sep 4 13:44:34.163: INFO: Got endpoints: latency-svc-vgqgd [1.184666769s] Sep 4 13:44:34.199: INFO: Created: latency-svc-g97hc Sep 4 13:44:34.217: INFO: Got endpoints: latency-svc-g97hc [1.203586338s] Sep 4 13:44:34.299: INFO: Created: latency-svc-5qmbh Sep 4 13:44:34.329: INFO: Got endpoints: latency-svc-5qmbh [1.198592718s] Sep 4 13:44:34.330: INFO: Created: latency-svc-dw9cp Sep 4 13:44:34.379: INFO: Got endpoints: latency-svc-dw9cp [1.220051635s] Sep 4 13:44:34.443: INFO: Created: latency-svc-xbtr4 Sep 4 13:44:34.457: INFO: Got endpoints: latency-svc-xbtr4 [1.262246745s] Sep 4 13:44:34.505: INFO: Created: latency-svc-brt99 Sep 4 13:44:34.535: INFO: Got endpoints: latency-svc-brt99 [1.26178918s] Sep 4 13:44:34.615: INFO: Created: latency-svc-xg7vx Sep 4 13:44:34.627: INFO: Got endpoints: latency-svc-xg7vx [1.305358152s] Sep 4 13:44:34.795: INFO: Created: latency-svc-gvb8f Sep 4 13:44:34.814: INFO: Got endpoints: latency-svc-gvb8f [1.450539977s] Sep 4 13:44:34.870: INFO: Created: latency-svc-r9x9n Sep 4 13:44:34.886: INFO: Got endpoints: latency-svc-r9x9n [1.403071179s] Sep 4 13:44:34.950: INFO: Created: latency-svc-grtk4 Sep 4 13:44:35.049: INFO: Got endpoints: latency-svc-grtk4 [1.536412379s] Sep 4 13:44:35.157: INFO: Created: latency-svc-ngkqj Sep 4 13:44:35.162: INFO: Got endpoints: latency-svc-ngkqj [1.557349579s] Sep 4 13:44:35.211: INFO: Created: latency-svc-vtpgj Sep 4 13:44:35.227: INFO: Got endpoints: latency-svc-vtpgj [1.568781458s] Sep 4 13:44:35.281: INFO: Created: latency-svc-dmbxs Sep 4 13:44:35.288: INFO: Got endpoints: latency-svc-dmbxs [1.51720752s] Sep 4 13:44:35.328: INFO: Created: latency-svc-vnjzh Sep 4 13:44:35.342: INFO: Got endpoints: latency-svc-vnjzh [1.323452603s] Sep 4 13:44:35.447: INFO: Created: 
latency-svc-5pklq Sep 4 13:44:35.489: INFO: Created: latency-svc-42lmz Sep 4 13:44:35.491: INFO: Got endpoints: latency-svc-5pklq [1.436021612s] Sep 4 13:44:35.524: INFO: Got endpoints: latency-svc-42lmz [1.3605081s] Sep 4 13:44:35.591: INFO: Created: latency-svc-9h4ds Sep 4 13:44:35.662: INFO: Got endpoints: latency-svc-9h4ds [1.444555866s] Sep 4 13:44:35.663: INFO: Created: latency-svc-zzntp Sep 4 13:44:35.789: INFO: Got endpoints: latency-svc-zzntp [1.460255862s] Sep 4 13:44:35.819: INFO: Created: latency-svc-9ls2l Sep 4 13:44:35.835: INFO: Got endpoints: latency-svc-9ls2l [1.456293226s] Sep 4 13:44:35.868: INFO: Created: latency-svc-7cjtt Sep 4 13:44:35.962: INFO: Got endpoints: latency-svc-7cjtt [1.505837242s] Sep 4 13:44:35.979: INFO: Created: latency-svc-b9jzm Sep 4 13:44:35.999: INFO: Got endpoints: latency-svc-b9jzm [1.46373936s] Sep 4 13:44:36.053: INFO: Created: latency-svc-wk95f Sep 4 13:44:36.166: INFO: Got endpoints: latency-svc-wk95f [1.539169423s] Sep 4 13:44:36.169: INFO: Created: latency-svc-ddwf6 Sep 4 13:44:36.171: INFO: Got endpoints: latency-svc-ddwf6 [1.357272775s] Sep 4 13:44:36.209: INFO: Created: latency-svc-zldfw Sep 4 13:44:36.249: INFO: Got endpoints: latency-svc-zldfw [1.362561741s] Sep 4 13:44:36.399: INFO: Created: latency-svc-8g2sh Sep 4 13:44:36.429: INFO: Got endpoints: latency-svc-8g2sh [1.379961185s] Sep 4 13:44:36.454: INFO: Created: latency-svc-zzn95 Sep 4 13:44:36.471: INFO: Got endpoints: latency-svc-zzn95 [1.30874983s] Sep 4 13:44:36.496: INFO: Created: latency-svc-khlg2 Sep 4 13:44:36.542: INFO: Got endpoints: latency-svc-khlg2 [1.314909926s] Sep 4 13:44:36.570: INFO: Created: latency-svc-scc9p Sep 4 13:44:36.587: INFO: Got endpoints: latency-svc-scc9p [1.298442699s] Sep 4 13:44:36.705: INFO: Created: latency-svc-glsh6 Sep 4 13:44:36.750: INFO: Got endpoints: latency-svc-glsh6 [1.408349266s] Sep 4 13:44:36.754: INFO: Created: latency-svc-2n6tk Sep 4 13:44:36.872: INFO: Got endpoints: latency-svc-2n6tk [1.380227948s] Sep 4 13:44:36.874: INFO: Created: latency-svc-hf64h Sep 4 13:44:36.879: INFO: Got endpoints: latency-svc-hf64h [1.355718061s] Sep 4 13:44:36.903: INFO: Created: latency-svc-k9zww Sep 4 13:44:36.915: INFO: Got endpoints: latency-svc-k9zww [1.253268041s] Sep 4 13:44:36.935: INFO: Created: latency-svc-xjjxv Sep 4 13:44:36.952: INFO: Got endpoints: latency-svc-xjjxv [1.163007511s] Sep 4 13:44:37.046: INFO: Created: latency-svc-fqv8q Sep 4 13:44:37.078: INFO: Got endpoints: latency-svc-fqv8q [1.242778604s] Sep 4 13:44:37.222: INFO: Created: latency-svc-wrd29 Sep 4 13:44:37.251: INFO: Got endpoints: latency-svc-wrd29 [1.288927199s] Sep 4 13:44:37.253: INFO: Created: latency-svc-jf7wp Sep 4 13:44:37.278: INFO: Got endpoints: latency-svc-jf7wp [1.279166549s] Sep 4 13:44:37.314: INFO: Created: latency-svc-c8zxc Sep 4 13:44:37.387: INFO: Got endpoints: latency-svc-c8zxc [1.221112115s] Sep 4 13:44:37.399: INFO: Created: latency-svc-552tt Sep 4 13:44:37.404: INFO: Got endpoints: latency-svc-552tt [1.232837209s] Sep 4 13:44:37.538: INFO: Created: latency-svc-5ts7x Sep 4 13:44:37.554: INFO: Got endpoints: latency-svc-5ts7x [1.30524351s] Sep 4 13:44:37.596: INFO: Created: latency-svc-g2zkg Sep 4 13:44:37.615: INFO: Got endpoints: latency-svc-g2zkg [1.185358188s] Sep 4 13:44:37.636: INFO: Created: latency-svc-8zj2r Sep 4 13:44:37.705: INFO: Got endpoints: latency-svc-8zj2r [1.234295413s] Sep 4 13:44:37.708: INFO: Created: latency-svc-n74xn Sep 4 13:44:37.723: INFO: Got endpoints: latency-svc-n74xn [1.18071655s] Sep 4 13:44:37.752: INFO: Created: 
latency-svc-twrrc Sep 4 13:44:37.772: INFO: Got endpoints: latency-svc-twrrc [1.185563749s] Sep 4 13:44:37.800: INFO: Created: latency-svc-nwms6 Sep 4 13:44:37.836: INFO: Got endpoints: latency-svc-nwms6 [1.08555253s] Sep 4 13:44:37.852: INFO: Created: latency-svc-n84bl Sep 4 13:44:37.868: INFO: Got endpoints: latency-svc-n84bl [996.497272ms] Sep 4 13:44:37.894: INFO: Created: latency-svc-29b5q Sep 4 13:44:37.921: INFO: Got endpoints: latency-svc-29b5q [1.041171055s] Sep 4 13:44:38.017: INFO: Created: latency-svc-zndv7 Sep 4 13:44:38.023: INFO: Got endpoints: latency-svc-zndv7 [1.107336517s] Sep 4 13:44:38.178: INFO: Created: latency-svc-lnkns Sep 4 13:44:38.212: INFO: Got endpoints: latency-svc-lnkns [1.2604637s] Sep 4 13:44:38.244: INFO: Created: latency-svc-wstzg Sep 4 13:44:38.261: INFO: Got endpoints: latency-svc-wstzg [1.182778369s] Sep 4 13:44:38.358: INFO: Created: latency-svc-62fxp Sep 4 13:44:38.395: INFO: Created: latency-svc-2kdwf Sep 4 13:44:38.395: INFO: Got endpoints: latency-svc-62fxp [1.143499409s] Sep 4 13:44:38.424: INFO: Got endpoints: latency-svc-2kdwf [1.145287246s] Sep 4 13:44:38.502: INFO: Created: latency-svc-cj6br Sep 4 13:44:38.536: INFO: Created: latency-svc-dwkzz Sep 4 13:44:38.536: INFO: Got endpoints: latency-svc-cj6br [1.148810127s] Sep 4 13:44:38.551: INFO: Got endpoints: latency-svc-dwkzz [1.146643353s] Sep 4 13:44:38.578: INFO: Created: latency-svc-j2gmt Sep 4 13:44:38.587: INFO: Got endpoints: latency-svc-j2gmt [1.032586253s] Sep 4 13:44:38.651: INFO: Created: latency-svc-qxrsg Sep 4 13:44:38.698: INFO: Got endpoints: latency-svc-qxrsg [1.083028136s] Sep 4 13:44:38.702: INFO: Created: latency-svc-bsqg6 Sep 4 13:44:38.830: INFO: Got endpoints: latency-svc-bsqg6 [1.125020574s] Sep 4 13:44:38.841: INFO: Created: latency-svc-nr7zf Sep 4 13:44:38.850: INFO: Got endpoints: latency-svc-nr7zf [1.126325153s] Sep 4 13:44:38.873: INFO: Created: latency-svc-krkpp Sep 4 13:44:38.892: INFO: Got endpoints: latency-svc-krkpp [1.119406404s] Sep 4 13:44:38.968: INFO: Created: latency-svc-v54lx Sep 4 13:44:38.997: INFO: Got endpoints: latency-svc-v54lx [1.161326574s] Sep 4 13:44:38.998: INFO: Created: latency-svc-48qpc Sep 4 13:44:39.033: INFO: Got endpoints: latency-svc-48qpc [1.164839019s] Sep 4 13:44:39.066: INFO: Created: latency-svc-dh5q6 Sep 4 13:44:39.142: INFO: Got endpoints: latency-svc-dh5q6 [1.221008213s] Sep 4 13:44:39.146: INFO: Created: latency-svc-cw8bc Sep 4 13:44:39.158: INFO: Got endpoints: latency-svc-cw8bc [1.135155103s] Sep 4 13:44:39.183: INFO: Created: latency-svc-m5dh9 Sep 4 13:44:39.200: INFO: Got endpoints: latency-svc-m5dh9 [987.79208ms] Sep 4 13:44:39.225: INFO: Created: latency-svc-ftqgb Sep 4 13:44:39.321: INFO: Got endpoints: latency-svc-ftqgb [1.060821778s] Sep 4 13:44:39.330: INFO: Created: latency-svc-c7t4l Sep 4 13:44:39.339: INFO: Got endpoints: latency-svc-c7t4l [943.978005ms] Sep 4 13:44:39.364: INFO: Created: latency-svc-x85zp Sep 4 13:44:39.382: INFO: Got endpoints: latency-svc-x85zp [958.200873ms] Sep 4 13:44:39.418: INFO: Created: latency-svc-t5gbk Sep 4 13:44:39.459: INFO: Got endpoints: latency-svc-t5gbk [922.412144ms] Sep 4 13:44:39.467: INFO: Created: latency-svc-8nvs7 Sep 4 13:44:39.498: INFO: Got endpoints: latency-svc-8nvs7 [947.016815ms] Sep 4 13:44:39.534: INFO: Created: latency-svc-876nj Sep 4 13:44:39.545: INFO: Got endpoints: latency-svc-876nj [958.208416ms] Sep 4 13:44:39.627: INFO: Created: latency-svc-kqg9n Sep 4 13:44:39.631: INFO: Got endpoints: latency-svc-kqg9n [933.591584ms] Sep 4 13:44:39.670: INFO: Created: 
latency-svc-8ff57 Sep 4 13:44:39.697: INFO: Got endpoints: latency-svc-8ff57 [866.363741ms] Sep 4 13:44:39.771: INFO: Created: latency-svc-tgjvk Sep 4 13:44:39.779: INFO: Got endpoints: latency-svc-tgjvk [929.792118ms] Sep 4 13:44:39.849: INFO: Created: latency-svc-6kn7k Sep 4 13:44:39.956: INFO: Got endpoints: latency-svc-6kn7k [1.063864816s] Sep 4 13:44:39.959: INFO: Created: latency-svc-6nshj Sep 4 13:44:39.967: INFO: Got endpoints: latency-svc-6nshj [969.196075ms] Sep 4 13:44:39.993: INFO: Created: latency-svc-vhx5p Sep 4 13:44:40.010: INFO: Got endpoints: latency-svc-vhx5p [976.349362ms] Sep 4 13:44:40.035: INFO: Created: latency-svc-xt7wd Sep 4 13:44:40.111: INFO: Got endpoints: latency-svc-xt7wd [969.529199ms] Sep 4 13:44:40.146: INFO: Created: latency-svc-fg2rt Sep 4 13:44:40.188: INFO: Got endpoints: latency-svc-fg2rt [1.030292944s] Sep 4 13:44:40.246: INFO: Created: latency-svc-np575 Sep 4 13:44:40.265: INFO: Got endpoints: latency-svc-np575 [1.064286527s] Sep 4 13:44:40.294: INFO: Created: latency-svc-pjgnm Sep 4 13:44:40.319: INFO: Got endpoints: latency-svc-pjgnm [997.380237ms] Sep 4 13:44:40.400: INFO: Created: latency-svc-v4nbq Sep 4 13:44:40.409: INFO: Got endpoints: latency-svc-v4nbq [1.069936449s] Sep 4 13:44:40.457: INFO: Created: latency-svc-vgqf8 Sep 4 13:44:40.471: INFO: Got endpoints: latency-svc-vgqf8 [1.089346994s] Sep 4 13:44:40.564: INFO: Created: latency-svc-m5gtg Sep 4 13:44:40.590: INFO: Got endpoints: latency-svc-m5gtg [1.131178491s] Sep 4 13:44:40.625: INFO: Created: latency-svc-z7v4b Sep 4 13:44:40.729: INFO: Got endpoints: latency-svc-z7v4b [1.231242612s] Sep 4 13:44:40.763: INFO: Created: latency-svc-92t9k Sep 4 13:44:40.771: INFO: Got endpoints: latency-svc-92t9k [1.226410536s] Sep 4 13:44:40.799: INFO: Created: latency-svc-t88b6 Sep 4 13:44:40.819: INFO: Got endpoints: latency-svc-t88b6 [1.187837572s] Sep 4 13:44:40.883: INFO: Created: latency-svc-6hg4n Sep 4 13:44:40.897: INFO: Got endpoints: latency-svc-6hg4n [1.200396574s] Sep 4 13:44:40.966: INFO: Created: latency-svc-pj2g6 Sep 4 13:44:41.020: INFO: Got endpoints: latency-svc-pj2g6 [1.240204379s] Sep 4 13:44:41.070: INFO: Created: latency-svc-bcj7m Sep 4 13:44:41.176: INFO: Got endpoints: latency-svc-bcj7m [1.219902592s] Sep 4 13:44:41.214: INFO: Created: latency-svc-x68x2 Sep 4 13:44:41.241: INFO: Got endpoints: latency-svc-x68x2 [1.274705647s] Sep 4 13:44:41.268: INFO: Created: latency-svc-xnv85 Sep 4 13:44:41.328: INFO: Got endpoints: latency-svc-xnv85 [1.318095781s] Sep 4 13:44:41.331: INFO: Created: latency-svc-8gkk9 Sep 4 13:44:41.357: INFO: Got endpoints: latency-svc-8gkk9 [1.245340423s] Sep 4 13:44:41.391: INFO: Created: latency-svc-7lrl9 Sep 4 13:44:41.404: INFO: Got endpoints: latency-svc-7lrl9 [1.215745892s] Sep 4 13:44:41.404: INFO: Latencies: [139.808103ms 157.075944ms 256.022377ms 271.927735ms 423.392931ms 432.910795ms 475.89316ms 512.234115ms 576.377909ms 632.976857ms 753.289899ms 793.32074ms 811.212307ms 866.363741ms 922.412144ms 929.792118ms 933.41963ms 933.591584ms 935.695602ms 937.388894ms 937.552474ms 943.978005ms 944.377567ms 947.016815ms 952.745463ms 958.200873ms 958.208416ms 961.018456ms 967.716763ms 969.196075ms 969.529199ms 975.861239ms 976.349362ms 987.79208ms 996.497272ms 997.380237ms 1.013523607s 1.030292944s 1.032586253s 1.039169627s 1.041171055s 1.048258596s 1.04864108s 1.051241339s 1.051552569s 1.052645676s 1.055714275s 1.060821778s 1.061288475s 1.063864816s 1.064286527s 1.069936449s 1.072972995s 1.073949462s 1.075735953s 1.082984851s 1.083028136s 1.08555253s 
1.087098487s 1.089346994s 1.101374483s 1.10266681s 1.106273616s 1.107336517s 1.113725519s 1.119406404s 1.125020574s 1.125027056s 1.126325153s 1.131178491s 1.135155103s 1.136002345s 1.139538924s 1.139810005s 1.143499409s 1.145287246s 1.145352073s 1.146643353s 1.148299906s 1.148705937s 1.148810127s 1.149926092s 1.152938117s 1.154972633s 1.1561847s 1.156544208s 1.158045357s 1.158267983s 1.161326574s 1.161958724s 1.1623739s 1.163007511s 1.164839019s 1.164879681s 1.165723553s 1.16804882s 1.168358886s 1.170339691s 1.170537121s 1.171596155s 1.1730153s 1.173596201s 1.173832869s 1.175181496s 1.178118521s 1.178878834s 1.179246839s 1.180217381s 1.18071655s 1.182778369s 1.184666769s 1.185358188s 1.185563749s 1.186347417s 1.186515425s 1.187837572s 1.188478363s 1.191938197s 1.195376604s 1.198478329s 1.198592718s 1.198956572s 1.199083623s 1.199896497s 1.200396574s 1.202622038s 1.202878437s 1.203328774s 1.203586338s 1.207784694s 1.210357947s 1.214747431s 1.215745892s 1.219280325s 1.219902592s 1.220051635s 1.221008213s 1.221112115s 1.221730497s 1.222265301s 1.223274735s 1.226410536s 1.227378325s 1.228132136s 1.231242612s 1.232837209s 1.234295413s 1.237750157s 1.238744455s 1.240204379s 1.24189547s 1.242778604s 1.245340423s 1.253092215s 1.253268041s 1.2583977s 1.259765859s 1.2604637s 1.260710043s 1.261193144s 1.26178918s 1.262246745s 1.262617146s 1.263830916s 1.263854414s 1.265854604s 1.274705647s 1.279166549s 1.284669616s 1.288927199s 1.291201596s 1.298442699s 1.30524351s 1.305358152s 1.30874983s 1.314909926s 1.318095781s 1.323452603s 1.328130311s 1.355718061s 1.357272775s 1.360373071s 1.3605081s 1.362561741s 1.379961185s 1.380227948s 1.403071179s 1.408349266s 1.436021612s 1.444555866s 1.450539977s 1.456293226s 1.460255862s 1.46373936s 1.505837242s 1.51720752s 1.536412379s 1.539169423s 1.557349579s 1.568781458s] Sep 4 13:44:41.404: INFO: 50 %ile: 1.1730153s Sep 4 13:44:41.404: INFO: 90 %ile: 1.357272775s Sep 4 13:44:41.404: INFO: 99 %ile: 1.557349579s Sep 4 13:44:41.404: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:44:41.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-6770" for this suite. 
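------------------------------
For reference, percentile figures like the "50 %ile / 90 %ile / 99 %ile" lines above are order statistics over the 200 "Got endpoints" durations. The Go sketch below shows one way to derive them; it is a minimal illustration under an assumed index convention, not the e2e framework's actual helper.

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the p-th percentile of an ascending-sorted sample
// slice, using the simple "floor(n*p/100)" index convention (an
// assumption; the real framework helper may round differently).
func percentile(sorted []time.Duration, p int) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := len(sorted) * p / 100
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// A few endpoint-propagation samples taken from the log above,
	// expressed in nanoseconds (1148299906ns == 1.148299906s).
	samples := []time.Duration{
		1148299906, 1149926092, 1113725519, 1055714275,
		933419630, 944377567, 937552474, 975861239,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
	}
	fmt.Printf("Total sample count: %d\n", len(samples))
}
------------------------------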
• [SLOW TEST:20.039 seconds] [sig-network] Service endpoints latency /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":303,"completed":116,"skipped":1977,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:44:41.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-713689e7-64ba-43ed-aba2-5ceceabb67bf STEP: Creating a pod to test consume secrets Sep 4 13:44:41.544: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7bad70c4-64b8-4ef7-bf4a-12a939b13d6b" in namespace "projected-9078" to be "Succeeded or Failed" Sep 4 13:44:41.601: INFO: Pod "pod-projected-secrets-7bad70c4-64b8-4ef7-bf4a-12a939b13d6b": Phase="Pending", Reason="", readiness=false. Elapsed: 57.506651ms Sep 4 13:44:43.606: INFO: Pod "pod-projected-secrets-7bad70c4-64b8-4ef7-bf4a-12a939b13d6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06202419s Sep 4 13:44:45.675: INFO: Pod "pod-projected-secrets-7bad70c4-64b8-4ef7-bf4a-12a939b13d6b": Phase="Running", Reason="", readiness=true. Elapsed: 4.130861865s Sep 4 13:44:47.795: INFO: Pod "pod-projected-secrets-7bad70c4-64b8-4ef7-bf4a-12a939b13d6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.251791213s STEP: Saw pod success Sep 4 13:44:47.796: INFO: Pod "pod-projected-secrets-7bad70c4-64b8-4ef7-bf4a-12a939b13d6b" satisfied condition "Succeeded or Failed" Sep 4 13:44:47.945: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-7bad70c4-64b8-4ef7-bf4a-12a939b13d6b container projected-secret-volume-test: STEP: delete the pod Sep 4 13:44:48.111: INFO: Waiting for pod pod-projected-secrets-7bad70c4-64b8-4ef7-bf4a-12a939b13d6b to disappear Sep 4 13:44:48.145: INFO: Pod pod-projected-secrets-7bad70c4-64b8-4ef7-bf4a-12a939b13d6b no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:44:48.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9078" for this suite. 
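------------------------------
The "with mappings" variant exercised above differs from the plain projected-secret case in that each secret key is remapped to a caller-chosen file path via items. A minimal sketch of an equivalent pod spec in Go follows; the object names, key names, and image tag are illustrative assumptions, not the test's literal values.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithProjectedSecret returns a pod whose projected volume maps the
// secret key "data-1" to the remapped path "new-path-data-1", which the
// container then reads back from the mount point.
func podWithProjectedSecret() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-secret-test-map",
								},
								// The mapping: secret key -> custom file path.
								Items: []corev1.KeyToPath{{
									Key:  "data-1",
									Path: "new-path-data-1",
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.20", // assumed tag
				Command: []string{"cat", "/etc/projected-secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
				}},
			}},
		},
	}
}
------------------------------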
• [SLOW TEST:6.724 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":117,"skipped":2021,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:44:48.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:44:48.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4489" for this suite. 
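------------------------------
The QOS-class check above relies on a simple rule: when every container's resource requests equal its limits for both cpu and memory, Kubernetes classifies the pod as Guaranteed and records that in status.qosClass. A minimal Go sketch of such a pod spec follows; the names and image tag are illustrative assumptions.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// guaranteedPod builds a pod whose requests and limits match exactly,
// so its status.qosClass is expected to be "Guaranteed". Setting the
// requests lower than the limits would instead yield "Burstable", and
// omitting resources entirely yields "BestEffort".
func guaranteedPod() *corev1.Pod {
	res := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "guaranteed-qos-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20", // assumed tag
				Resources: corev1.ResourceRequirements{
					Requests: res, // requests == limits for cpu and memory...
					Limits:   res, // ...which is exactly the Guaranteed condition
				},
			}},
		},
	}
}
------------------------------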
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":303,"completed":118,"skipped":2042,"failed":0} SSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:44:48.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-sbwdz in namespace proxy-3193 I0904 13:44:48.973415 7 runners.go:190] Created replication controller with name: proxy-service-sbwdz, namespace: proxy-3193, replica count: 1 I0904 13:44:50.023774 7 runners.go:190] proxy-service-sbwdz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0904 13:44:51.024019 7 runners.go:190] proxy-service-sbwdz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0904 13:44:52.024303 7 runners.go:190] proxy-service-sbwdz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0904 13:44:53.024523 7 runners.go:190] proxy-service-sbwdz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0904 13:44:54.024846 7 runners.go:190] proxy-service-sbwdz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0904 13:44:55.025067 7 runners.go:190] proxy-service-sbwdz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0904 13:44:56.025257 7 runners.go:190] proxy-service-sbwdz Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 4 13:44:56.063: INFO: setup took 7.236651555s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Sep 4 13:44:56.177: INFO: (0) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:1080/proxy/: test<... (200; 114.425108ms) Sep 4 13:44:56.178: INFO: (0) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:160/proxy/: foo (200; 114.77108ms) Sep 4 13:44:56.178: INFO: (0) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:1080/proxy/: ... 
(200; 115.077888ms) Sep 4 13:44:56.178: INFO: (0) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:160/proxy/: foo (200; 115.403719ms) Sep 4 13:44:56.179: INFO: (0) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp/proxy/: test (200; 115.658706ms) Sep 4 13:44:56.179: INFO: (0) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 115.467576ms) Sep 4 13:44:56.183: INFO: (0) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 120.046717ms) Sep 4 13:44:56.185: INFO: (0) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:462/proxy/: tls qux (200; 122.385833ms) Sep 4 13:44:56.185: INFO: (0) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:460/proxy/: tls baz (200; 122.406921ms) Sep 4 13:44:56.186: INFO: (0) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:443/proxy/: test<... (200; 78.118911ms) Sep 4 13:44:56.299: INFO: (1) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:462/proxy/: tls qux (200; 78.237711ms) Sep 4 13:44:56.299: INFO: (1) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 78.206601ms) Sep 4 13:44:56.299: INFO: (1) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:160/proxy/: foo (200; 78.731464ms) Sep 4 13:44:56.300: INFO: (1) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:1080/proxy/: ... (200; 79.121047ms) Sep 4 13:44:56.301: INFO: (1) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:160/proxy/: foo (200; 79.780988ms) Sep 4 13:44:56.301: INFO: (1) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:460/proxy/: tls baz (200; 79.781606ms) Sep 4 13:44:56.301: INFO: (1) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp/proxy/: test (200; 80.051403ms) Sep 4 13:44:56.301: INFO: (1) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:443/proxy/: test (200; 365.179123ms) Sep 4 13:44:56.737: INFO: (2) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:1080/proxy/: test<... (200; 365.15191ms) Sep 4 13:44:56.987: INFO: (2) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:1080/proxy/: ... (200; 615.302897ms) Sep 4 13:44:56.987: INFO: (2) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:460/proxy/: tls baz (200; 615.359829ms) Sep 4 13:44:56.988: INFO: (2) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 615.581137ms) Sep 4 13:44:56.988: INFO: (2) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:462/proxy/: tls qux (200; 615.765455ms) Sep 4 13:44:56.988: INFO: (2) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:443/proxy/: test (200; 90.872318ms) Sep 4 13:44:57.168: INFO: (3) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:1080/proxy/: test<... (200; 90.987718ms) Sep 4 13:44:57.169: INFO: (3) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:443/proxy/: ... 
(200; 92.186642ms) Sep 4 13:44:57.170: INFO: (3) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:462/proxy/: tls qux (200; 92.25034ms) Sep 4 13:44:57.258: INFO: (3) /api/v1/namespaces/proxy-3193/services/http:proxy-service-sbwdz:portname1/proxy/: foo (200; 180.127268ms) Sep 4 13:44:57.258: INFO: (3) /api/v1/namespaces/proxy-3193/services/https:proxy-service-sbwdz:tlsportname1/proxy/: tls baz (200; 180.119929ms) Sep 4 13:44:57.258: INFO: (3) /api/v1/namespaces/proxy-3193/services/proxy-service-sbwdz:portname2/proxy/: bar (200; 180.140007ms) Sep 4 13:44:57.707: INFO: (4) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:460/proxy/: tls baz (200; 449.04162ms) Sep 4 13:44:57.707: INFO: (4) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 449.636576ms) Sep 4 13:44:57.707: INFO: (4) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp/proxy/: test (200; 449.525008ms) Sep 4 13:44:57.708: INFO: (4) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:1080/proxy/: test<... (200; 449.580642ms) Sep 4 13:44:57.708: INFO: (4) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:462/proxy/: tls qux (200; 449.715811ms) Sep 4 13:44:57.708: INFO: (4) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 449.544531ms) Sep 4 13:44:57.708: INFO: (4) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:160/proxy/: foo (200; 449.590091ms) Sep 4 13:44:57.708: INFO: (4) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:1080/proxy/: ... (200; 449.620915ms) Sep 4 13:44:57.709: INFO: (4) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:160/proxy/: foo (200; 450.970276ms) Sep 4 13:44:57.709: INFO: (4) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:443/proxy/: test (200; 34.259838ms) Sep 4 13:44:57.797: INFO: (5) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 34.670427ms) Sep 4 13:44:57.797: INFO: (5) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 34.697156ms) Sep 4 13:44:57.798: INFO: (5) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:160/proxy/: foo (200; 35.513197ms) Sep 4 13:44:57.798: INFO: (5) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:1080/proxy/: test<... (200; 35.484825ms) Sep 4 13:44:57.799: INFO: (5) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:1080/proxy/: ... (200; 35.887118ms) Sep 4 13:44:57.799: INFO: (5) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:443/proxy/: ... (200; 27.27328ms) Sep 4 13:44:57.962: INFO: (6) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:460/proxy/: tls baz (200; 27.248724ms) Sep 4 13:44:57.962: INFO: (6) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:462/proxy/: tls qux (200; 27.316423ms) Sep 4 13:44:57.963: INFO: (6) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:160/proxy/: foo (200; 27.516089ms) Sep 4 13:44:57.963: INFO: (6) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:1080/proxy/: test<... 
(200; 27.560551ms) Sep 4 13:44:57.963: INFO: (6) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 27.567384ms) Sep 4 13:44:57.963: INFO: (6) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp/proxy/: test (200; 27.630919ms) Sep 4 13:44:57.963: INFO: (6) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 27.671634ms) Sep 4 13:44:58.042: INFO: (6) /api/v1/namespaces/proxy-3193/services/https:proxy-service-sbwdz:tlsportname2/proxy/: tls qux (200; 107.208247ms) Sep 4 13:44:58.043: INFO: (6) /api/v1/namespaces/proxy-3193/services/http:proxy-service-sbwdz:portname1/proxy/: foo (200; 107.598927ms) Sep 4 13:44:58.043: INFO: (6) /api/v1/namespaces/proxy-3193/services/proxy-service-sbwdz:portname1/proxy/: foo (200; 107.590765ms) Sep 4 13:44:58.043: INFO: (6) /api/v1/namespaces/proxy-3193/services/https:proxy-service-sbwdz:tlsportname1/proxy/: tls baz (200; 107.660212ms) Sep 4 13:44:58.043: INFO: (6) /api/v1/namespaces/proxy-3193/services/http:proxy-service-sbwdz:portname2/proxy/: bar (200; 107.760057ms) Sep 4 13:44:58.043: INFO: (6) /api/v1/namespaces/proxy-3193/services/proxy-service-sbwdz:portname2/proxy/: bar (200; 107.719463ms) Sep 4 13:44:58.054: INFO: (7) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 11.277574ms) Sep 4 13:44:58.054: INFO: (7) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:1080/proxy/: test<... (200; 11.251185ms) Sep 4 13:44:58.054: INFO: (7) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp/proxy/: test (200; 11.395402ms) Sep 4 13:44:58.055: INFO: (7) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:443/proxy/: ... (200; 11.979572ms) Sep 4 13:44:58.055: INFO: (7) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:160/proxy/: foo (200; 11.931668ms) Sep 4 13:44:58.055: INFO: (7) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 12.264738ms) Sep 4 13:44:58.056: INFO: (7) /api/v1/namespaces/proxy-3193/services/http:proxy-service-sbwdz:portname1/proxy/: foo (200; 12.830363ms) Sep 4 13:44:58.056: INFO: (7) /api/v1/namespaces/proxy-3193/services/https:proxy-service-sbwdz:tlsportname1/proxy/: tls baz (200; 13.388474ms) Sep 4 13:44:58.056: INFO: (7) /api/v1/namespaces/proxy-3193/services/https:proxy-service-sbwdz:tlsportname2/proxy/: tls qux (200; 13.411987ms) Sep 4 13:44:58.097: INFO: (7) /api/v1/namespaces/proxy-3193/services/proxy-service-sbwdz:portname1/proxy/: foo (200; 53.568559ms) Sep 4 13:44:58.171: INFO: (7) /api/v1/namespaces/proxy-3193/services/http:proxy-service-sbwdz:portname2/proxy/: bar (200; 127.681576ms) Sep 4 13:44:58.171: INFO: (7) /api/v1/namespaces/proxy-3193/services/proxy-service-sbwdz:portname2/proxy/: bar (200; 127.561043ms) Sep 4 13:44:58.196: INFO: (8) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:1080/proxy/: test<... (200; 25.75342ms) Sep 4 13:44:58.197: INFO: (8) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 25.799384ms) Sep 4 13:44:58.197: INFO: (8) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 26.027502ms) Sep 4 13:44:58.197: INFO: (8) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:443/proxy/: test (200; 28.641631ms) Sep 4 13:44:58.199: INFO: (8) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:1080/proxy/: ... 
(200; 28.5398ms) Sep 4 13:44:58.199: INFO: (8) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:160/proxy/: foo (200; 28.597636ms) Sep 4 13:44:58.200: INFO: (8) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:462/proxy/: tls qux (200; 29.000034ms) Sep 4 13:44:58.218: INFO: (8) /api/v1/namespaces/proxy-3193/services/http:proxy-service-sbwdz:portname2/proxy/: bar (200; 47.700101ms) Sep 4 13:44:58.218: INFO: (8) /api/v1/namespaces/proxy-3193/services/proxy-service-sbwdz:portname1/proxy/: foo (200; 47.680674ms) Sep 4 13:44:58.218: INFO: (8) /api/v1/namespaces/proxy-3193/services/proxy-service-sbwdz:portname2/proxy/: bar (200; 47.778731ms) Sep 4 13:44:58.218: INFO: (8) /api/v1/namespaces/proxy-3193/services/https:proxy-service-sbwdz:tlsportname1/proxy/: tls baz (200; 47.717484ms) Sep 4 13:44:58.218: INFO: (8) /api/v1/namespaces/proxy-3193/services/http:proxy-service-sbwdz:portname1/proxy/: foo (200; 47.775086ms) Sep 4 13:44:58.219: INFO: (8) /api/v1/namespaces/proxy-3193/services/https:proxy-service-sbwdz:tlsportname2/proxy/: tls qux (200; 47.962811ms) Sep 4 13:44:58.238: INFO: (9) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:460/proxy/: tls baz (200; 18.758024ms) Sep 4 13:44:58.335: INFO: (9) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:443/proxy/: test (200; 116.82247ms) Sep 4 13:44:58.336: INFO: (9) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:1080/proxy/: test<... (200; 116.892439ms) Sep 4 13:44:58.336: INFO: (9) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:462/proxy/: tls qux (200; 116.868053ms) Sep 4 13:44:58.336: INFO: (9) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:1080/proxy/: ... (200; 116.940569ms) Sep 4 13:44:58.336: INFO: (9) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:160/proxy/: foo (200; 117.133412ms) Sep 4 13:44:58.336: INFO: (9) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 117.464332ms) Sep 4 13:44:58.336: INFO: (9) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 117.56475ms) Sep 4 13:44:58.336: INFO: (9) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:160/proxy/: foo (200; 117.593937ms) Sep 4 13:44:58.355: INFO: (9) /api/v1/namespaces/proxy-3193/services/http:proxy-service-sbwdz:portname1/proxy/: foo (200; 136.338304ms) Sep 4 13:44:58.355: INFO: (9) /api/v1/namespaces/proxy-3193/services/http:proxy-service-sbwdz:portname2/proxy/: bar (200; 136.25809ms) Sep 4 13:44:58.355: INFO: (9) /api/v1/namespaces/proxy-3193/services/proxy-service-sbwdz:portname2/proxy/: bar (200; 136.546885ms) Sep 4 13:44:58.355: INFO: (9) /api/v1/namespaces/proxy-3193/services/https:proxy-service-sbwdz:tlsportname2/proxy/: tls qux (200; 136.532673ms) Sep 4 13:44:58.355: INFO: (9) /api/v1/namespaces/proxy-3193/services/https:proxy-service-sbwdz:tlsportname1/proxy/: tls baz (200; 136.605591ms) Sep 4 13:44:58.356: INFO: (9) /api/v1/namespaces/proxy-3193/services/proxy-service-sbwdz:portname1/proxy/: foo (200; 136.739777ms) Sep 4 13:44:58.381: INFO: (10) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp/proxy/: test (200; 23.769574ms) Sep 4 13:44:58.381: INFO: (10) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:443/proxy/: test<... 
(200; 59.637262ms) Sep 4 13:44:58.417: INFO: (10) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 59.55278ms) Sep 4 13:44:58.417: INFO: (10) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:160/proxy/: foo (200; 60.002361ms) Sep 4 13:44:58.417: INFO: (10) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:460/proxy/: tls baz (200; 60.050914ms) Sep 4 13:44:58.418: INFO: (10) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:1080/proxy/: ... (200; 60.9485ms) Sep 4 13:44:58.433: INFO: (10) /api/v1/namespaces/proxy-3193/services/http:proxy-service-sbwdz:portname2/proxy/: bar (200; 76.266977ms) Sep 4 13:44:58.433: INFO: (10) /api/v1/namespaces/proxy-3193/services/http:proxy-service-sbwdz:portname1/proxy/: foo (200; 76.649941ms) Sep 4 13:44:58.433: INFO: (10) /api/v1/namespaces/proxy-3193/services/proxy-service-sbwdz:portname1/proxy/: foo (200; 76.783043ms) Sep 4 13:44:58.459: INFO: (10) /api/v1/namespaces/proxy-3193/services/proxy-service-sbwdz:portname2/proxy/: bar (200; 102.269177ms) Sep 4 13:44:58.459: INFO: (10) /api/v1/namespaces/proxy-3193/services/https:proxy-service-sbwdz:tlsportname1/proxy/: tls baz (200; 102.228269ms) Sep 4 13:44:58.459: INFO: (10) /api/v1/namespaces/proxy-3193/services/https:proxy-service-sbwdz:tlsportname2/proxy/: tls qux (200; 102.614316ms) Sep 4 13:44:58.468: INFO: (11) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:460/proxy/: tls baz (200; 8.926363ms) Sep 4 13:44:58.469: INFO: (11) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:462/proxy/: tls qux (200; 9.777368ms) Sep 4 13:44:58.469: INFO: (11) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:160/proxy/: foo (200; 9.785648ms) Sep 4 13:44:58.469: INFO: (11) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:160/proxy/: foo (200; 9.796165ms) Sep 4 13:44:58.474: INFO: (11) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:1080/proxy/: test<... (200; 14.068288ms) Sep 4 13:44:58.474: INFO: (11) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 14.107634ms) Sep 4 13:44:58.474: INFO: (11) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 14.15954ms) Sep 4 13:44:58.474: INFO: (11) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:1080/proxy/: ... (200; 14.264734ms) Sep 4 13:44:58.474: INFO: (11) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:443/proxy/: test (200; 14.15511ms) Sep 4 13:44:58.486: INFO: (11) /api/v1/namespaces/proxy-3193/services/http:proxy-service-sbwdz:portname1/proxy/: foo (200; 26.781728ms) Sep 4 13:44:58.504: INFO: (11) /api/v1/namespaces/proxy-3193/services/proxy-service-sbwdz:portname2/proxy/: bar (200; 44.094918ms) Sep 4 13:44:58.504: INFO: (11) /api/v1/namespaces/proxy-3193/services/proxy-service-sbwdz:portname1/proxy/: foo (200; 44.000345ms) Sep 4 13:44:58.504: INFO: (11) /api/v1/namespaces/proxy-3193/services/http:proxy-service-sbwdz:portname2/proxy/: bar (200; 44.029366ms) Sep 4 13:44:58.504: INFO: (11) /api/v1/namespaces/proxy-3193/services/https:proxy-service-sbwdz:tlsportname1/proxy/: tls baz (200; 44.17542ms) Sep 4 13:44:58.504: INFO: (11) /api/v1/namespaces/proxy-3193/services/https:proxy-service-sbwdz:tlsportname2/proxy/: tls qux (200; 44.822666ms) Sep 4 13:44:58.527: INFO: (12) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:1080/proxy/: ... 
(200; 22.044765ms) Sep 4 13:44:58.527: INFO: (12) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 22.000984ms) Sep 4 13:44:58.528: INFO: (12) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 23.83336ms) Sep 4 13:44:58.528: INFO: (12) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:443/proxy/: test<... (200; 23.942505ms) Sep 4 13:44:58.528: INFO: (12) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:460/proxy/: tls baz (200; 24.05327ms) Sep 4 13:44:58.529: INFO: (12) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp/proxy/: test (200; 23.936392ms) Sep 4 13:44:58.529: INFO: (12) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:160/proxy/: foo (200; 24.214304ms) Sep 4 13:44:58.531: INFO: (12) /api/v1/namespaces/proxy-3193/services/https:proxy-service-sbwdz:tlsportname2/proxy/: tls qux (200; 26.464828ms) Sep 4 13:44:58.532: INFO: (12) /api/v1/namespaces/proxy-3193/services/http:proxy-service-sbwdz:portname2/proxy/: bar (200; 27.318819ms) Sep 4 13:44:58.532: INFO: (12) /api/v1/namespaces/proxy-3193/services/http:proxy-service-sbwdz:portname1/proxy/: foo (200; 27.308154ms) Sep 4 13:44:58.532: INFO: (12) /api/v1/namespaces/proxy-3193/services/https:proxy-service-sbwdz:tlsportname1/proxy/: tls baz (200; 27.532641ms) Sep 4 13:44:58.532: INFO: (12) /api/v1/namespaces/proxy-3193/services/proxy-service-sbwdz:portname2/proxy/: bar (200; 27.511694ms) Sep 4 13:44:58.532: INFO: (12) /api/v1/namespaces/proxy-3193/services/proxy-service-sbwdz:portname1/proxy/: foo (200; 27.378289ms) Sep 4 13:44:58.611: INFO: (13) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:160/proxy/: foo (200; 79.010484ms) Sep 4 13:44:58.611: INFO: (13) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:1080/proxy/: ... (200; 78.813673ms) Sep 4 13:44:58.611: INFO: (13) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp/proxy/: test (200; 79.055514ms) Sep 4 13:44:58.611: INFO: (13) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:460/proxy/: tls baz (200; 78.937837ms) Sep 4 13:44:58.612: INFO: (13) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:160/proxy/: foo (200; 79.408925ms) Sep 4 13:44:58.612: INFO: (13) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:1080/proxy/: test<... 
(200; 79.63438ms) Sep 4 13:44:58.612: INFO: (13) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:462/proxy/: tls qux (200; 79.765504ms) Sep 4 13:44:58.613: INFO: (13) /api/v1/namespaces/proxy-3193/services/https:proxy-service-sbwdz:tlsportname2/proxy/: tls qux (200; 80.645889ms) Sep 4 13:44:58.614: INFO: (13) /api/v1/namespaces/proxy-3193/services/https:proxy-service-sbwdz:tlsportname1/proxy/: tls baz (200; 81.515856ms) Sep 4 13:44:58.614: INFO: (13) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 81.289563ms) Sep 4 13:44:58.614: INFO: (13) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 81.708475ms) Sep 4 13:44:58.614: INFO: (13) /api/v1/namespaces/proxy-3193/services/proxy-service-sbwdz:portname1/proxy/: foo (200; 81.785508ms) Sep 4 13:44:58.614: INFO: (13) /api/v1/namespaces/proxy-3193/services/proxy-service-sbwdz:portname2/proxy/: bar (200; 82.234242ms) Sep 4 13:44:58.614: INFO: (13) /api/v1/namespaces/proxy-3193/services/http:proxy-service-sbwdz:portname1/proxy/: foo (200; 81.958373ms) Sep 4 13:44:58.614: INFO: (13) /api/v1/namespaces/proxy-3193/services/http:proxy-service-sbwdz:portname2/proxy/: bar (200; 82.337211ms) Sep 4 13:44:58.614: INFO: (13) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:443/proxy/: ... (200; 31.033332ms) Sep 4 13:44:58.646: INFO: (14) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 31.01724ms) Sep 4 13:44:58.646: INFO: (14) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:443/proxy/: test<... (200; 31.371067ms) Sep 4 13:44:58.646: INFO: (14) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:460/proxy/: tls baz (200; 31.373609ms) Sep 4 13:44:58.646: INFO: (14) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:462/proxy/: tls qux (200; 31.663095ms) Sep 4 13:44:58.646: INFO: (14) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp/proxy/: test (200; 31.708974ms) Sep 4 13:44:58.648: INFO: (14) /api/v1/namespaces/proxy-3193/services/proxy-service-sbwdz:portname2/proxy/: bar (200; 33.626793ms) Sep 4 13:44:58.648: INFO: (14) /api/v1/namespaces/proxy-3193/services/https:proxy-service-sbwdz:tlsportname2/proxy/: tls qux (200; 33.782247ms) Sep 4 13:44:58.649: INFO: (14) /api/v1/namespaces/proxy-3193/services/http:proxy-service-sbwdz:portname2/proxy/: bar (200; 34.334449ms) Sep 4 13:44:58.649: INFO: (14) /api/v1/namespaces/proxy-3193/services/http:proxy-service-sbwdz:portname1/proxy/: foo (200; 34.411448ms) Sep 4 13:44:58.649: INFO: (14) /api/v1/namespaces/proxy-3193/services/proxy-service-sbwdz:portname1/proxy/: foo (200; 34.446986ms) Sep 4 13:44:58.650: INFO: (14) /api/v1/namespaces/proxy-3193/services/https:proxy-service-sbwdz:tlsportname1/proxy/: tls baz (200; 34.940611ms) Sep 4 13:44:58.654: INFO: (15) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp/proxy/: test (200; 4.418942ms) Sep 4 13:44:58.685: INFO: (15) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 35.691372ms) Sep 4 13:44:58.685: INFO: (15) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:1080/proxy/: test<... (200; 35.81958ms) Sep 4 13:44:58.685: INFO: (15) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:160/proxy/: foo (200; 35.715105ms) Sep 4 13:44:58.686: INFO: (15) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:1080/proxy/: ... 
(200; 35.799864ms) Sep 4 13:44:58.686: INFO: (15) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:443/proxy/: ... (200; 12.762916ms) Sep 4 13:44:58.763: INFO: (16) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:460/proxy/: tls baz (200; 12.531145ms) Sep 4 13:44:58.763: INFO: (16) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:160/proxy/: foo (200; 12.707819ms) Sep 4 13:44:58.763: INFO: (16) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:1080/proxy/: test<... (200; 12.804115ms) Sep 4 13:44:58.764: INFO: (16) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 13.076467ms) Sep 4 13:44:58.764: INFO: (16) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:462/proxy/: tls qux (200; 13.567106ms) Sep 4 13:44:58.764: INFO: (16) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp/proxy/: test (200; 13.164784ms) Sep 4 13:44:58.764: INFO: (16) /api/v1/namespaces/proxy-3193/services/https:proxy-service-sbwdz:tlsportname2/proxy/: tls qux (200; 14.041062ms) Sep 4 13:44:58.765: INFO: (16) /api/v1/namespaces/proxy-3193/services/http:proxy-service-sbwdz:portname1/proxy/: foo (200; 14.665241ms) Sep 4 13:44:58.765: INFO: (16) /api/v1/namespaces/proxy-3193/services/proxy-service-sbwdz:portname2/proxy/: bar (200; 14.667272ms) Sep 4 13:44:58.765: INFO: (16) /api/v1/namespaces/proxy-3193/services/http:proxy-service-sbwdz:portname2/proxy/: bar (200; 14.835074ms) Sep 4 13:44:58.765: INFO: (16) /api/v1/namespaces/proxy-3193/services/proxy-service-sbwdz:portname1/proxy/: foo (200; 14.934853ms) Sep 4 13:44:58.765: INFO: (16) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:443/proxy/: test (200; 8.002853ms) Sep 4 13:44:58.789: INFO: (17) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:160/proxy/: foo (200; 8.125023ms) Sep 4 13:44:58.789: INFO: (17) /api/v1/namespaces/proxy-3193/services/https:proxy-service-sbwdz:tlsportname2/proxy/: tls qux (200; 8.090392ms) Sep 4 13:44:58.789: INFO: (17) /api/v1/namespaces/proxy-3193/services/http:proxy-service-sbwdz:portname2/proxy/: bar (200; 8.132855ms) Sep 4 13:44:58.789: INFO: (17) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:160/proxy/: foo (200; 8.095561ms) Sep 4 13:44:58.789: INFO: (17) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:460/proxy/: tls baz (200; 8.161068ms) Sep 4 13:44:58.789: INFO: (17) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:443/proxy/: ... (200; 8.27448ms) Sep 4 13:44:58.789: INFO: (17) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 8.244801ms) Sep 4 13:44:58.790: INFO: (17) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 9.042702ms) Sep 4 13:44:58.790: INFO: (17) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:462/proxy/: tls qux (200; 9.052694ms) Sep 4 13:44:58.790: INFO: (17) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:1080/proxy/: test<... (200; 9.196585ms) Sep 4 13:44:58.815: INFO: (17) /api/v1/namespaces/proxy-3193/services/proxy-service-sbwdz:portname1/proxy/: foo (200; 34.375696ms) Sep 4 13:44:58.886: INFO: (18) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:160/proxy/: foo (200; 70.163479ms) Sep 4 13:44:58.886: INFO: (18) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:1080/proxy/: ... 
(200; 70.221852ms) Sep 4 13:44:58.886: INFO: (18) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp/proxy/: test (200; 70.209049ms) Sep 4 13:44:58.886: INFO: (18) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:1080/proxy/: test<... (200; 70.499144ms) Sep 4 13:44:58.886: INFO: (18) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 70.426502ms) Sep 4 13:44:58.886: INFO: (18) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 70.701445ms) Sep 4 13:44:58.887: INFO: (18) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:160/proxy/: foo (200; 71.901384ms) Sep 4 13:44:58.888: INFO: (18) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:443/proxy/: ... (200; 11.550981ms) Sep 4 13:44:58.914: INFO: (19) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp:1080/proxy/: test<... (200; 12.136083ms) Sep 4 13:44:58.914: INFO: (19) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:462/proxy/: tls qux (200; 12.462564ms) Sep 4 13:44:58.914: INFO: (19) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:460/proxy/: tls baz (200; 12.498091ms) Sep 4 13:44:58.914: INFO: (19) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:160/proxy/: foo (200; 12.481055ms) Sep 4 13:44:58.914: INFO: (19) /api/v1/namespaces/proxy-3193/pods/proxy-service-sbwdz-vrtnp/proxy/: test (200; 12.626289ms) Sep 4 13:44:58.915: INFO: (19) /api/v1/namespaces/proxy-3193/services/http:proxy-service-sbwdz:portname2/proxy/: bar (200; 13.422753ms) Sep 4 13:44:58.915: INFO: (19) /api/v1/namespaces/proxy-3193/services/http:proxy-service-sbwdz:portname1/proxy/: foo (200; 13.617346ms) Sep 4 13:44:58.915: INFO: (19) /api/v1/namespaces/proxy-3193/services/proxy-service-sbwdz:portname1/proxy/: foo (200; 13.762859ms) Sep 4 13:44:58.915: INFO: (19) /api/v1/namespaces/proxy-3193/services/https:proxy-service-sbwdz:tlsportname2/proxy/: tls qux (200; 13.875216ms) Sep 4 13:44:58.915: INFO: (19) /api/v1/namespaces/proxy-3193/services/https:proxy-service-sbwdz:tlsportname1/proxy/: tls baz (200; 13.825854ms) Sep 4 13:44:58.915: INFO: (19) /api/v1/namespaces/proxy-3193/pods/http:proxy-service-sbwdz-vrtnp:162/proxy/: bar (200; 13.890718ms) Sep 4 13:44:58.915: INFO: (19) /api/v1/namespaces/proxy-3193/pods/https:proxy-service-sbwdz-vrtnp:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-3036b403-1a1f-4b3a-8f26-17418a3890de STEP: Creating a pod to test consume configMaps Sep 4 13:45:10.526: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-01db378f-14af-49a5-80c6-3700b28c7bf9" in namespace "projected-7687" to be "Succeeded or Failed" Sep 4 13:45:10.550: INFO: Pod "pod-projected-configmaps-01db378f-14af-49a5-80c6-3700b28c7bf9": Phase="Pending", Reason="", readiness=false. Elapsed: 23.368253ms Sep 4 13:45:12.713: INFO: Pod "pod-projected-configmaps-01db378f-14af-49a5-80c6-3700b28c7bf9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.186603095s Sep 4 13:45:14.716: INFO: Pod "pod-projected-configmaps-01db378f-14af-49a5-80c6-3700b28c7bf9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189394122s Sep 4 13:45:16.753: INFO: Pod "pod-projected-configmaps-01db378f-14af-49a5-80c6-3700b28c7bf9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.227021019s STEP: Saw pod success Sep 4 13:45:16.753: INFO: Pod "pod-projected-configmaps-01db378f-14af-49a5-80c6-3700b28c7bf9" satisfied condition "Succeeded or Failed" Sep 4 13:45:16.786: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-01db378f-14af-49a5-80c6-3700b28c7bf9 container projected-configmap-volume-test: STEP: delete the pod Sep 4 13:45:16.969: INFO: Waiting for pod pod-projected-configmaps-01db378f-14af-49a5-80c6-3700b28c7bf9 to disappear Sep 4 13:45:17.040: INFO: Pod pod-projected-configmaps-01db378f-14af-49a5-80c6-3700b28c7bf9 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:45:17.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7687" for this suite. • [SLOW TEST:6.904 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":120,"skipped":2084,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:45:17.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Sep 4 13:45:17.290: INFO: Waiting up to 5m0s for pod "pod-86dabe7f-4327-4779-ac8b-3cc561c8dc5f" in namespace "emptydir-208" to be "Succeeded or Failed" Sep 4 13:45:17.384: INFO: Pod "pod-86dabe7f-4327-4779-ac8b-3cc561c8dc5f": Phase="Pending", Reason="", readiness=false. Elapsed: 93.248791ms Sep 4 13:45:19.397: INFO: Pod "pod-86dabe7f-4327-4779-ac8b-3cc561c8dc5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106731167s Sep 4 13:45:21.622: INFO: Pod "pod-86dabe7f-4327-4779-ac8b-3cc561c8dc5f": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.332006307s Sep 4 13:45:23.645: INFO: Pod "pod-86dabe7f-4327-4779-ac8b-3cc561c8dc5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.354744668s STEP: Saw pod success Sep 4 13:45:23.645: INFO: Pod "pod-86dabe7f-4327-4779-ac8b-3cc561c8dc5f" satisfied condition "Succeeded or Failed" Sep 4 13:45:23.679: INFO: Trying to get logs from node latest-worker pod pod-86dabe7f-4327-4779-ac8b-3cc561c8dc5f container test-container: STEP: delete the pod Sep 4 13:45:23.848: INFO: Waiting for pod pod-86dabe7f-4327-4779-ac8b-3cc561c8dc5f to disappear Sep 4 13:45:23.914: INFO: Pod pod-86dabe7f-4327-4779-ac8b-3cc561c8dc5f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:45:23.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-208" for this suite. • [SLOW TEST:6.860 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":121,"skipped":2104,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:45:23.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 13:45:24.185: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"cce20487-a0ad-46a8-93ad-e7750b9cdf9d", Controller:(*bool)(0xc004ca4e52), BlockOwnerDeletion:(*bool)(0xc004ca4e53)}} Sep 4 13:45:24.198: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"1da9ac62-6be8-4062-a09d-b88b6fd13f72", Controller:(*bool)(0xc004bf4092), BlockOwnerDeletion:(*bool)(0xc004bf4093)}} Sep 4 13:45:24.215: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"020d3adc-84d9-4be9-b1da-9645b9f46988", Controller:(*bool)(0xc004b51f8a), BlockOwnerDeletion:(*bool)(0xc004b51f8b)}} [AfterEach] [sig-api-machinery] Garbage collector 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:45:29.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3821" for this suite. • [SLOW TEST:5.481 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":303,"completed":122,"skipped":2154,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:45:29.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 4 13:45:30.412: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 4 13:45:32.837: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734823930, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734823930, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734823930, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734823930, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 4 13:45:35.874: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:45:35.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-573" for this suite. STEP: Destroying namespace "webhook-573-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.873 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":303,"completed":123,"skipped":2171,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:45:36.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments Sep 4 13:45:36.746: INFO: Waiting up to 5m0s for pod "client-containers-53b0f8e1-4908-4824-8394-11225070d8c6" in namespace "containers-9801" to be "Succeeded or Failed" Sep 4 13:45:36.963: INFO: Pod "client-containers-53b0f8e1-4908-4824-8394-11225070d8c6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 216.419518ms Sep 4 13:45:38.967: INFO: Pod "client-containers-53b0f8e1-4908-4824-8394-11225070d8c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22061936s Sep 4 13:45:41.000: INFO: Pod "client-containers-53b0f8e1-4908-4824-8394-11225070d8c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.253805767s STEP: Saw pod success Sep 4 13:45:41.000: INFO: Pod "client-containers-53b0f8e1-4908-4824-8394-11225070d8c6" satisfied condition "Succeeded or Failed" Sep 4 13:45:41.003: INFO: Trying to get logs from node latest-worker2 pod client-containers-53b0f8e1-4908-4824-8394-11225070d8c6 container test-container: STEP: delete the pod Sep 4 13:45:41.326: INFO: Waiting for pod client-containers-53b0f8e1-4908-4824-8394-11225070d8c6 to disappear Sep 4 13:45:41.367: INFO: Pod client-containers-53b0f8e1-4908-4824-8394-11225070d8c6 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:45:41.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9801" for this suite. • [SLOW TEST:5.137 seconds] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":303,"completed":124,"skipped":2194,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:45:41.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-91407fe3-a7b2-44cb-af7c-80b778e1896d STEP: Creating secret with name secret-projected-all-test-volume-2842f466-9eb9-40fb-a973-60c213f8abb1 STEP: Creating a pod to test Check all projections for projected volume plugin Sep 4 13:45:41.573: INFO: Waiting up to 5m0s for pod "projected-volume-92162a52-244d-4d46-861f-b46e7d5e4632" in namespace "projected-6083" to be "Succeeded or Failed" Sep 4 13:45:41.577: INFO: Pod "projected-volume-92162a52-244d-4d46-861f-b46e7d5e4632": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.016325ms Sep 4 13:45:43.579: INFO: Pod "projected-volume-92162a52-244d-4d46-861f-b46e7d5e4632": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005871107s Sep 4 13:45:45.583: INFO: Pod "projected-volume-92162a52-244d-4d46-861f-b46e7d5e4632": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009271953s STEP: Saw pod success Sep 4 13:45:45.583: INFO: Pod "projected-volume-92162a52-244d-4d46-861f-b46e7d5e4632" satisfied condition "Succeeded or Failed" Sep 4 13:45:45.585: INFO: Trying to get logs from node latest-worker pod projected-volume-92162a52-244d-4d46-861f-b46e7d5e4632 container projected-all-volume-test: STEP: delete the pod Sep 4 13:45:45.621: INFO: Waiting for pod projected-volume-92162a52-244d-4d46-861f-b46e7d5e4632 to disappear Sep 4 13:45:45.635: INFO: Pod projected-volume-92162a52-244d-4d46-861f-b46e7d5e4632 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:45:45.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6083" for this suite. •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":303,"completed":125,"skipped":2237,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:45:45.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:45:51.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6654" for this suite. 
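The watch-ordering property verified above can be poked at by hand against any cluster. A minimal sketch, assuming an illustrative namespace and using resourceVersion=0 as a stand-in for the specific versions the test records: two watches opened this way over the same resource must deliver the same events in the same order.

kubectl proxy --port=8001 &
# stream ConfigMap watch events starting from a given resourceVersion
curl -N "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=0"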
• [SLOW TEST:5.645 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":303,"completed":126,"skipped":2262,"failed":0} [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:45:51.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-4277/secret-test-aa0aed57-4a4f-452e-afd7-7567f47673eb STEP: Creating a pod to test consume secrets Sep 4 13:45:51.486: INFO: Waiting up to 5m0s for pod "pod-configmaps-ad769cff-4e0f-437e-a8f6-364deda023d1" in namespace "secrets-4277" to be "Succeeded or Failed" Sep 4 13:45:51.499: INFO: Pod "pod-configmaps-ad769cff-4e0f-437e-a8f6-364deda023d1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.620308ms Sep 4 13:45:53.580: INFO: Pod "pod-configmaps-ad769cff-4e0f-437e-a8f6-364deda023d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093433553s Sep 4 13:45:55.584: INFO: Pod "pod-configmaps-ad769cff-4e0f-437e-a8f6-364deda023d1": Phase="Running", Reason="", readiness=true. Elapsed: 4.097540066s Sep 4 13:45:57.588: INFO: Pod "pod-configmaps-ad769cff-4e0f-437e-a8f6-364deda023d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.101670328s STEP: Saw pod success Sep 4 13:45:57.588: INFO: Pod "pod-configmaps-ad769cff-4e0f-437e-a8f6-364deda023d1" satisfied condition "Succeeded or Failed" Sep 4 13:45:57.591: INFO: Trying to get logs from node latest-worker pod pod-configmaps-ad769cff-4e0f-437e-a8f6-364deda023d1 container env-test: STEP: delete the pod Sep 4 13:45:57.665: INFO: Waiting for pod pod-configmaps-ad769cff-4e0f-437e-a8f6-364deda023d1 to disappear Sep 4 13:45:57.673: INFO: Pod pod-configmaps-ad769cff-4e0f-437e-a8f6-364deda023d1 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:45:57.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4277" for this suite. 
• [SLOW TEST:6.365 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:36 should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":127,"skipped":2262,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:45:57.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:45:57.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6042" for this suite. 
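The lifecycle walked through above (create, fetch, patch, list by selector, delete by collection) maps one-to-one onto kubectl verbs. A minimal sketch with illustrative names:

kubectl create configmap demo-cm --from-literal=data=value
kubectl label configmap demo-cm test=demo
kubectl get configmap demo-cm -o yaml                                # fetch
kubectl patch configmap demo-cm --type=merge -p '{"data":{"data":"patched"}}'
kubectl get configmaps --all-namespaces -l test=demo                 # list with a label selector
kubectl delete configmaps -l test=demo                               # delete by collection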
•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":303,"completed":128,"skipped":2281,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:45:57.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Sep 4 13:46:02.009: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-5889 PodName:var-expansion-666cf439-563d-43ca-ad89-b745e2a226b8 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 4 13:46:02.009: INFO: >>> kubeConfig: /root/.kube/config I0904 13:46:02.048420 7 log.go:181] (0xc001efe6e0) (0xc000d0fb80) Create stream I0904 13:46:02.048453 7 log.go:181] (0xc001efe6e0) (0xc000d0fb80) Stream added, broadcasting: 1 I0904 13:46:02.050901 7 log.go:181] (0xc001efe6e0) Reply frame received for 1 I0904 13:46:02.050958 7 log.go:181] (0xc001efe6e0) (0xc0003480a0) Create stream I0904 13:46:02.050984 7 log.go:181] (0xc001efe6e0) (0xc0003480a0) Stream added, broadcasting: 3 I0904 13:46:02.052045 7 log.go:181] (0xc001efe6e0) Reply frame received for 3 I0904 13:46:02.052098 7 log.go:181] (0xc001efe6e0) (0xc002c5c3c0) Create stream I0904 13:46:02.052114 7 log.go:181] (0xc001efe6e0) (0xc002c5c3c0) Stream added, broadcasting: 5 I0904 13:46:02.053265 7 log.go:181] (0xc001efe6e0) Reply frame received for 5 I0904 13:46:02.127329 7 log.go:181] (0xc001efe6e0) Data frame received for 3 I0904 13:46:02.127366 7 log.go:181] (0xc0003480a0) (3) Data frame handling I0904 13:46:02.127400 7 log.go:181] (0xc001efe6e0) Data frame received for 5 I0904 13:46:02.127432 7 log.go:181] (0xc002c5c3c0) (5) Data frame handling I0904 13:46:02.128702 7 log.go:181] (0xc001efe6e0) Data frame received for 1 I0904 13:46:02.128717 7 log.go:181] (0xc000d0fb80) (1) Data frame handling I0904 13:46:02.128819 7 log.go:181] (0xc000d0fb80) (1) Data frame sent I0904 13:46:02.128839 7 log.go:181] (0xc001efe6e0) (0xc000d0fb80) Stream removed, broadcasting: 1 I0904 13:46:02.128921 7 log.go:181] (0xc001efe6e0) (0xc000d0fb80) Stream removed, broadcasting: 1 I0904 13:46:02.128936 7 log.go:181] (0xc001efe6e0) (0xc0003480a0) Stream removed, broadcasting: 3 I0904 13:46:02.129139 7 log.go:181] (0xc001efe6e0) (0xc002c5c3c0) Stream removed, broadcasting: 5 I0904 13:46:02.129209 7 log.go:181] (0xc001efe6e0) Go away received STEP: test for file in mounted path Sep 4 13:46:02.132: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-5889 PodName:var-expansion-666cf439-563d-43ca-ad89-b745e2a226b8 ContainerName:dapi-container 
Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 4 13:46:02.132: INFO: >>> kubeConfig: /root/.kube/config I0904 13:46:02.162277 7 log.go:181] (0xc0065406e0) (0xc002129900) Create stream I0904 13:46:02.162318 7 log.go:181] (0xc0065406e0) (0xc002129900) Stream added, broadcasting: 1 I0904 13:46:02.164300 7 log.go:181] (0xc0065406e0) Reply frame received for 1 I0904 13:46:02.164354 7 log.go:181] (0xc0065406e0) (0xc0021299a0) Create stream I0904 13:46:02.164368 7 log.go:181] (0xc0065406e0) (0xc0021299a0) Stream added, broadcasting: 3 I0904 13:46:02.165570 7 log.go:181] (0xc0065406e0) Reply frame received for 3 I0904 13:46:02.165606 7 log.go:181] (0xc0065406e0) (0xc000d0fc20) Create stream I0904 13:46:02.165622 7 log.go:181] (0xc0065406e0) (0xc000d0fc20) Stream added, broadcasting: 5 I0904 13:46:02.167881 7 log.go:181] (0xc0065406e0) Reply frame received for 5 I0904 13:46:02.250418 7 log.go:181] (0xc0065406e0) Data frame received for 3 I0904 13:46:02.250458 7 log.go:181] (0xc0021299a0) (3) Data frame handling I0904 13:46:02.250490 7 log.go:181] (0xc0065406e0) Data frame received for 5 I0904 13:46:02.250512 7 log.go:181] (0xc000d0fc20) (5) Data frame handling I0904 13:46:02.251852 7 log.go:181] (0xc0065406e0) Data frame received for 1 I0904 13:46:02.251923 7 log.go:181] (0xc002129900) (1) Data frame handling I0904 13:46:02.251975 7 log.go:181] (0xc002129900) (1) Data frame sent I0904 13:46:02.252008 7 log.go:181] (0xc0065406e0) (0xc002129900) Stream removed, broadcasting: 1 I0904 13:46:02.252079 7 log.go:181] (0xc0065406e0) Go away received I0904 13:46:02.252144 7 log.go:181] (0xc0065406e0) (0xc002129900) Stream removed, broadcasting: 1 I0904 13:46:02.252160 7 log.go:181] (0xc0065406e0) (0xc0021299a0) Stream removed, broadcasting: 3 I0904 13:46:02.252169 7 log.go:181] (0xc0065406e0) (0xc000d0fc20) Stream removed, broadcasting: 5 STEP: updating the annotation value Sep 4 13:46:02.808: INFO: Successfully updated pod "var-expansion-666cf439-563d-43ca-ad89-b745e2a226b8" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Sep 4 13:46:02.811: INFO: Deleting pod "var-expansion-666cf439-563d-43ca-ad89-b745e2a226b8" in namespace "var-expansion-5889" Sep 4 13:46:02.834: INFO: Wait up to 5m0s for pod "var-expansion-666cf439-563d-43ca-ad89-b745e2a226b8" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:46:50.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5889" for this suite. 
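The subpath writes verified above rely on volume subPathExpr expansion from the downward API. The real test wires two mounts into the same volume and also flips an annotation mid-run; the following is a minimal sketch of just the expansion mechanism, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /volume_mount/test.log && ls -l /volume_mount"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir
      mountPath: /volume_mount
      subPathExpr: $(POD_NAME)   # expands to a per-pod subdirectory of the volume
  volumes:
  - name: workdir
    emptyDir: {}
EOF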
• [SLOW TEST:52.993 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":303,"completed":129,"skipped":2292,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:46:50.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-ea13f99c-ec98-4e2b-8cd6-cbf517d6b49c STEP: Creating a pod to test consume secrets Sep 4 13:46:51.083: INFO: Waiting up to 5m0s for pod "pod-secrets-694b16ac-ed3d-4748-b426-5bc2e9d7d894" in namespace "secrets-2437" to be "Succeeded or Failed" Sep 4 13:46:51.131: INFO: Pod "pod-secrets-694b16ac-ed3d-4748-b426-5bc2e9d7d894": Phase="Pending", Reason="", readiness=false. Elapsed: 47.675287ms Sep 4 13:46:53.574: INFO: Pod "pod-secrets-694b16ac-ed3d-4748-b426-5bc2e9d7d894": Phase="Pending", Reason="", readiness=false. Elapsed: 2.490752707s Sep 4 13:46:55.577: INFO: Pod "pod-secrets-694b16ac-ed3d-4748-b426-5bc2e9d7d894": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.493858419s STEP: Saw pod success Sep 4 13:46:55.577: INFO: Pod "pod-secrets-694b16ac-ed3d-4748-b426-5bc2e9d7d894" satisfied condition "Succeeded or Failed" Sep 4 13:46:55.580: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-694b16ac-ed3d-4748-b426-5bc2e9d7d894 container secret-volume-test: STEP: delete the pod Sep 4 13:46:55.671: INFO: Waiting for pod pod-secrets-694b16ac-ed3d-4748-b426-5bc2e9d7d894 to disappear Sep 4 13:46:55.685: INFO: Pod pod-secrets-694b16ac-ed3d-4748-b426-5bc2e9d7d894 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:46:55.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2437" for this suite. STEP: Destroying namespace "secret-namespace-8253" for this suite. 
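The isolation property checked above (a same-named secret in another namespace must not interfere with the mount) can be demonstrated directly. A minimal sketch, with illustrative namespace and secret names:

kubectl create namespace ns-a
kubectl create namespace ns-b
kubectl create secret generic shared-secret -n ns-a --from-literal=data=from-ns-a
kubectl create secret generic shared-secret -n ns-b --from-literal=data=from-ns-b
kubectl apply -n ns-a -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-vol-test
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume/data"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: shared-secret
EOF
kubectl logs -n ns-a secret-vol-test   # prints from-ns-a, never the ns-b value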
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":303,"completed":130,"skipped":2312,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:46:55.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Sep 4 13:47:00.091: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:47:00.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6988" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":131,"skipped":2331,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:47:00.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 4 13:47:00.682: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a347808a-01d6-4442-940b-d33d3588ad2a" in namespace "projected-8321" to be "Succeeded or Failed" Sep 4 13:47:00.713: INFO: Pod "downwardapi-volume-a347808a-01d6-4442-940b-d33d3588ad2a": Phase="Pending", Reason="", readiness=false. Elapsed: 31.38492ms Sep 4 13:47:02.718: INFO: Pod "downwardapi-volume-a347808a-01d6-4442-940b-d33d3588ad2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035680224s Sep 4 13:47:04.721: INFO: Pod "downwardapi-volume-a347808a-01d6-4442-940b-d33d3588ad2a": Phase="Running", Reason="", readiness=true. Elapsed: 4.0393283s Sep 4 13:47:06.726: INFO: Pod "downwardapi-volume-a347808a-01d6-4442-940b-d33d3588ad2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.044092328s STEP: Saw pod success Sep 4 13:47:06.726: INFO: Pod "downwardapi-volume-a347808a-01d6-4442-940b-d33d3588ad2a" satisfied condition "Succeeded or Failed" Sep 4 13:47:06.728: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-a347808a-01d6-4442-940b-d33d3588ad2a container client-container: STEP: delete the pod Sep 4 13:47:06.769: INFO: Waiting for pod downwardapi-volume-a347808a-01d6-4442-940b-d33d3588ad2a to disappear Sep 4 13:47:06.778: INFO: Pod downwardapi-volume-a347808a-01d6-4442-940b-d33d3588ad2a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:47:06.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8321" for this suite. 
• [SLOW TEST:6.291 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":132,"skipped":2337,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] server version should find the server version [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] server version /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:47:06.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Request ServerVersion STEP: Confirm major version Sep 4 13:47:06.900: INFO: Major version: 1 STEP: Confirm minor version Sep 4 13:47:06.900: INFO: cleanMinorVersion: 19 Sep 4 13:47:06.900: INFO: Minor version: 19+ [AfterEach] [sig-api-machinery] server version /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:47:06.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-5886" for this suite. 
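The ServerVersion lookup above goes through the /version endpoint, which is also reachable by hand:

kubectl version -o json      # reports both client and server versions
kubectl get --raw /version   # the raw endpoint whose major/minor fields the test parses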
•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":303,"completed":133,"skipped":2347,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:47:07.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Sep 4 13:47:07.096: INFO: Waiting up to 5m0s for pod "pod-220557c6-f284-45d3-b5ae-52aada7d1b16" in namespace "emptydir-9002" to be "Succeeded or Failed" Sep 4 13:47:07.099: INFO: Pod "pod-220557c6-f284-45d3-b5ae-52aada7d1b16": Phase="Pending", Reason="", readiness=false. Elapsed: 3.074487ms Sep 4 13:47:09.104: INFO: Pod "pod-220557c6-f284-45d3-b5ae-52aada7d1b16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007395958s Sep 4 13:47:11.108: INFO: Pod "pod-220557c6-f284-45d3-b5ae-52aada7d1b16": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012040672s Sep 4 13:47:13.173: INFO: Pod "pod-220557c6-f284-45d3-b5ae-52aada7d1b16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.076259581s STEP: Saw pod success Sep 4 13:47:13.173: INFO: Pod "pod-220557c6-f284-45d3-b5ae-52aada7d1b16" satisfied condition "Succeeded or Failed" Sep 4 13:47:13.175: INFO: Trying to get logs from node latest-worker pod pod-220557c6-f284-45d3-b5ae-52aada7d1b16 container test-container: STEP: delete the pod Sep 4 13:47:13.299: INFO: Waiting for pod pod-220557c6-f284-45d3-b5ae-52aada7d1b16 to disappear Sep 4 13:47:13.327: INFO: Pod pod-220557c6-f284-45d3-b5ae-52aada7d1b16 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:47:13.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9002" for this suite. 
• [SLOW TEST:6.323 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":134,"skipped":2356,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:47:13.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 4 13:47:13.473: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 4 13:47:13.501: INFO: Waiting for terminating namespaces to be deleted... Sep 4 13:47:13.504: INFO: Logging pods the apiserver thinks is on node latest-worker before test Sep 4 13:47:13.513: INFO: daemon-set-64t9w from daemonsets-1323 started at 2020-08-21 01:17:50 +0000 UTC (1 container statuses recorded) Sep 4 13:47:13.513: INFO: Container app ready: true, restart count 0 Sep 4 13:47:13.513: INFO: daemon-set-ff4l6 from daemonsets-8598 started at 2020-08-26 01:17:55 +0000 UTC (1 container statuses recorded) Sep 4 13:47:13.513: INFO: Container app ready: true, restart count 0 Sep 4 13:47:13.513: INFO: live6 from default started at 2020-08-30 11:51:51 +0000 UTC (1 container statuses recorded) Sep 4 13:47:13.513: INFO: Container live6 ready: false, restart count 0 Sep 4 13:47:13.513: INFO: test-recreate-deployment-f79dd4667-n4rtn from deployment-6445 started at 2020-08-28 02:33:33 +0000 UTC (1 container statuses recorded) Sep 4 13:47:13.513: INFO: Container httpd ready: true, restart count 0 Sep 4 13:47:13.513: INFO: bono-7b5b98574f-j2wlq from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (2 container statuses recorded) Sep 4 13:47:13.513: INFO: Container bono ready: true, restart count 0 Sep 4 13:47:13.513: INFO: Container tailer ready: true, restart count 0 Sep 4 13:47:13.513: INFO: chronos-678bcff97d-665n9 from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (2 container statuses recorded) Sep 4 13:47:13.513: INFO: Container chronos ready: true, restart count 0 Sep 4 13:47:13.513: INFO: Container tailer ready: true, restart count 0 Sep 4 13:47:13.513: INFO: homer-6d85c54796-5grhn from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (1 container statuses recorded) Sep 4 13:47:13.513: INFO: Container homer ready: true, restart count 0 Sep 4 13:47:13.513: INFO: homestead-prov-54ddb995c5-phmgj from ims-fqddr started at 
2020-08-30 10:27:30 +0000 UTC (1 container statuses recorded) Sep 4 13:47:13.513: INFO: Container homestead-prov ready: true, restart count 0 Sep 4 13:47:13.513: INFO: live-test from ims-fqddr started at 2020-08-30 10:33:20 +0000 UTC (1 container statuses recorded) Sep 4 13:47:13.513: INFO: Container live-test ready: false, restart count 0 Sep 4 13:47:13.513: INFO: ralf-645db98795-l7gpf from ims-fqddr started at 2020-08-30 10:27:31 +0000 UTC (2 container statuses recorded) Sep 4 13:47:13.513: INFO: Container ralf ready: true, restart count 0 Sep 4 13:47:13.513: INFO: Container tailer ready: true, restart count 0 Sep 4 13:47:13.513: INFO: kindnet-gmpqb from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Sep 4 13:47:13.513: INFO: Container kindnet-cni ready: true, restart count 1 Sep 4 13:47:13.513: INFO: kube-proxy-82wrf from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Sep 4 13:47:13.513: INFO: Container kube-proxy ready: true, restart count 0 Sep 4 13:47:13.513: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Sep 4 13:47:13.639: INFO: daemon-set-jxhg7 from daemonsets-1323 started at 2020-08-21 01:17:50 +0000 UTC (1 container statuses recorded) Sep 4 13:47:13.639: INFO: Container app ready: true, restart count 0 Sep 4 13:47:13.639: INFO: daemon-set-6qbhl from daemonsets-8598 started at 2020-08-26 01:17:55 +0000 UTC (1 container statuses recorded) Sep 4 13:47:13.639: INFO: Container app ready: true, restart count 0 Sep 4 13:47:13.639: INFO: live3 from default started at 2020-08-30 11:14:22 +0000 UTC (1 container statuses recorded) Sep 4 13:47:13.639: INFO: Container live3 ready: false, restart count 0 Sep 4 13:47:13.639: INFO: live4 from default started at 2020-08-30 11:19:29 +0000 UTC (1 container statuses recorded) Sep 4 13:47:13.639: INFO: Container live4 ready: false, restart count 0 Sep 4 13:47:13.639: INFO: live5 from default started at 2020-08-30 11:22:52 +0000 UTC (1 container statuses recorded) Sep 4 13:47:13.639: INFO: Container live5 ready: false, restart count 0 Sep 4 13:47:13.639: INFO: astaire-66c5667484-7s6hd from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (2 container statuses recorded) Sep 4 13:47:13.639: INFO: Container astaire ready: true, restart count 0 Sep 4 13:47:13.639: INFO: Container tailer ready: true, restart count 0 Sep 4 13:47:13.639: INFO: cassandra-bf5b4886d-w9qkb from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (1 container statuses recorded) Sep 4 13:47:13.639: INFO: Container cassandra ready: true, restart count 0 Sep 4 13:47:13.639: INFO: ellis-668f49999b-84cll from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (1 container statuses recorded) Sep 4 13:47:13.639: INFO: Container ellis ready: true, restart count 0 Sep 4 13:47:13.639: INFO: etcd-744b4d9f98-5bm8d from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (1 container statuses recorded) Sep 4 13:47:13.639: INFO: Container etcd ready: true, restart count 0 Sep 4 13:47:13.639: INFO: homestead-59959889bd-dh787 from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (2 container statuses recorded) Sep 4 13:47:13.639: INFO: Container homestead ready: true, restart count 0 Sep 4 13:47:13.639: INFO: Container tailer ready: true, restart count 0 Sep 4 13:47:13.639: INFO: sprout-b4bbc5c49-m9nqx from ims-fqddr started at 2020-08-30 10:27:31 +0000 UTC (2 container statuses recorded) Sep 4 13:47:13.639: INFO: Container sprout ready: true, restart count 0 Sep 4 13:47:13.639: INFO: 
Container tailer ready: true, restart count 0 Sep 4 13:47:13.639: INFO: kindnet-grzzh from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Sep 4 13:47:13.639: INFO: Container kindnet-cni ready: true, restart count 1 Sep 4 13:47:13.639: INFO: kube-proxy-fjk8r from kube-system started at 2020-08-15 09:42:29 +0000 UTC (1 container statuses recorded) Sep 4 13:47:13.639: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-6b3e5c94-8658-49db-b500-c30252b8cd9c 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-6b3e5c94-8658-49db-b500-c30252b8cd9c off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-6b3e5c94-8658-49db-b500-c30252b8cd9c [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:47:23.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1313" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:10.532 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":303,"completed":135,"skipped":2373,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:47:23.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from NodePort to ExternalName [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-755 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-755 STEP: creating replication controller externalsvc in namespace services-755 I0904 13:47:24.180372 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-755, replica count: 2 I0904 13:47:27.230755 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0904 13:47:30.230959 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Sep 4 13:47:30.517: INFO: Creating new exec pod Sep 4 13:47:34.875: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-755 execpodnmb6v -- /bin/sh -x -c nslookup nodeport-service.services-755.svc.cluster.local' Sep 4 13:47:35.112: INFO: stderr: "I0904 13:47:35.014369 1732 log.go:181] (0xc00003a0b0) (0xc000d06000) Create stream\nI0904 13:47:35.014567 1732 log.go:181] (0xc00003a0b0) (0xc000d06000) Stream added, broadcasting: 1\nI0904 13:47:35.016619 1732 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0904 13:47:35.016686 1732 log.go:181] (0xc00003a0b0) (0xc000c66000) Create stream\nI0904 13:47:35.016703 1732 log.go:181] (0xc00003a0b0) (0xc000c66000) Stream added, broadcasting: 3\nI0904 13:47:35.017920 1732 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0904 13:47:35.017965 1732 log.go:181] (0xc00003a0b0) (0xc0007ac000) Create stream\nI0904 13:47:35.017984 1732 log.go:181] (0xc00003a0b0) (0xc0007ac000) Stream added, broadcasting: 5\nI0904 13:47:35.019087 1732 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0904 13:47:35.087930 1732 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0904 13:47:35.087957 1732 log.go:181] (0xc0007ac000) (5) Data frame handling\nI0904 13:47:35.087972 1732 log.go:181] (0xc0007ac000) (5) Data frame sent\n+ nslookup nodeport-service.services-755.svc.cluster.local\nI0904 13:47:35.096251 1732 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:47:35.096286 1732 log.go:181] (0xc000c66000) (3) Data frame handling\nI0904 13:47:35.096315 1732 log.go:181] (0xc000c66000) (3) Data frame sent\nI0904 13:47:35.097727 1732 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:47:35.097760 1732 log.go:181] (0xc000c66000) (3) Data frame handling\nI0904 13:47:35.097782 1732 log.go:181] (0xc000c66000) (3) Data frame sent\nI0904 13:47:35.098809 1732 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 13:47:35.098834 1732 log.go:181] (0xc000c66000) (3) Data frame handling\nI0904 13:47:35.098915 1732 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0904 13:47:35.098938 1732 log.go:181] (0xc0007ac000) (5) Data frame handling\nI0904 13:47:35.100621 1732 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0904 13:47:35.100648 1732 log.go:181] (0xc000d06000) (1) Data frame handling\nI0904 13:47:35.100664 1732 log.go:181] (0xc000d06000) (1) Data frame sent\nI0904 13:47:35.100696 1732 log.go:181] (0xc00003a0b0) (0xc000d06000) Stream 
removed, broadcasting: 1\nI0904 13:47:35.100882 1732 log.go:181] (0xc00003a0b0) Go away received\nI0904 13:47:35.101246 1732 log.go:181] (0xc00003a0b0) (0xc000d06000) Stream removed, broadcasting: 1\nI0904 13:47:35.101264 1732 log.go:181] (0xc00003a0b0) (0xc000c66000) Stream removed, broadcasting: 3\nI0904 13:47:35.101280 1732 log.go:181] (0xc00003a0b0) (0xc0007ac000) Stream removed, broadcasting: 5\n" Sep 4 13:47:35.112: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-755.svc.cluster.local\tcanonical name = externalsvc.services-755.svc.cluster.local.\nName:\texternalsvc.services-755.svc.cluster.local\nAddress: 10.101.211.108\n\n" STEP: deleting ReplicationController externalsvc in namespace services-755, will wait for the garbage collector to delete the pods Sep 4 13:47:35.173: INFO: Deleting ReplicationController externalsvc took: 7.694034ms Sep 4 13:47:35.573: INFO: Terminating ReplicationController externalsvc pods took: 400.187134ms Sep 4 13:47:49.726: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:47:49.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-755" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:25.913 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":303,"completed":136,"skipped":2380,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:47:49.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl replace /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1581 [It] should update a single-container pod's image [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image 
docker.io/library/httpd:2.4.38-alpine Sep 4 13:47:49.888: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-5354' Sep 4 13:47:50.005: INFO: stderr: "" Sep 4 13:47:50.005: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Sep 4 13:47:55.055: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-5354 -o json' Sep 4 13:47:55.218: INFO: stderr: "" Sep 4 13:47:55.218: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-09-04T13:47:49Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-09-04T13:47:49Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.234\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-09-04T13:47:53Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5354\",\n \"resourceVersion\": \"6814007\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-5354/pods/e2e-test-httpd-pod\",\n \"uid\": \"5860d41d-53e4-4dc1-9436-7610b26bda33\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-2rmln\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": 
\"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-2rmln\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-2rmln\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-04T13:47:50Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-04T13:47:53Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-04T13:47:53Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-04T13:47:49Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://346e639f0dbe2f40b3712222a872bf5206ad2c70b1426d5f0c78d8e099ba31b4\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-09-04T13:47:53Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.14\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.234\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.234\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-09-04T13:47:50Z\"\n }\n}\n" STEP: replace the image in the pod Sep 4 13:47:55.219: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-5354' Sep 4 13:47:55.725: INFO: stderr: "" Sep 4 13:47:55.725: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1586 Sep 4 13:47:55.734: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5354' Sep 4 13:48:00.023: INFO: stderr: "" Sep 4 13:48:00.023: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:48:00.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5354" for this suite. 
• [SLOW TEST:10.250 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1577 should update a single-container pod's image [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":303,"completed":137,"skipped":2400,"failed":0} SSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:48:00.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 13:48:00.182: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-15ed4b35-1958-4f03-a220-920d0c082625" in namespace "security-context-test-294" to be "Succeeded or Failed" Sep 4 13:48:00.186: INFO: Pod "busybox-privileged-false-15ed4b35-1958-4f03-a220-920d0c082625": Phase="Pending", Reason="", readiness=false. Elapsed: 3.789191ms Sep 4 13:48:02.253: INFO: Pod "busybox-privileged-false-15ed4b35-1958-4f03-a220-920d0c082625": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07083955s Sep 4 13:48:04.257: INFO: Pod "busybox-privileged-false-15ed4b35-1958-4f03-a220-920d0c082625": Phase="Running", Reason="", readiness=true. Elapsed: 4.075192406s Sep 4 13:48:06.260: INFO: Pod "busybox-privileged-false-15ed4b35-1958-4f03-a220-920d0c082625": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.07829753s Sep 4 13:48:06.260: INFO: Pod "busybox-privileged-false-15ed4b35-1958-4f03-a220-920d0c082625" satisfied condition "Succeeded or Failed" Sep 4 13:48:06.265: INFO: Got logs for pod "busybox-privileged-false-15ed4b35-1958-4f03-a220-920d0c082625": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:48:06.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-294" for this suite. 
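The "ip: RTNETLINK answers: Operation not permitted" line captured above is the assertion, not a failure: with privileged: false the container keeps the default reduced capability set (no CAP_NET_ADMIN), so netlink mutations are refused even though the container runs as root. A standalone sketch — pod name and probe command are illustrative, not the harness's generated ones:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-false    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    securityContext:
      privileged: false             # the setting under test
    # Any netlink-mutating command serves as the probe; it should print
    # "ip: RTNETLINK answers: Operation not permitted" and still exit 0.
    command: ["sh", "-c", "ip link add dummy0 type dummy || true"]
EOF
# Once the pod reaches Succeeded, its log carries the expected error:
kubectl logs busybox-privileged-false
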
• [SLOW TEST:6.332 seconds] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with privileged /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":138,"skipped":2403,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:48:06.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-899.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-899.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-899.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-899.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-899.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-899.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 4 13:48:14.836: INFO: DNS probes using dns-899/dns-test-272a1d3e-c129-4238-b110-ff0f518bb62f succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:48:15.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-899" for this suite. • [SLOW TEST:9.212 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":303,"completed":139,"skipped":2412,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:48:15.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 13:48:15.795: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Sep 4 13:48:18.773: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6422 create -f -' Sep 4 13:48:22.646: INFO: stderr: "" Sep 4 13:48:22.646: INFO: stdout: "e2e-test-crd-publish-openapi-6134-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Sep 4 13:48:22.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6422 delete e2e-test-crd-publish-openapi-6134-crds test-cr' Sep 4 13:48:22.776: INFO: stderr: "" Sep 4 13:48:22.776: INFO: stdout: "e2e-test-crd-publish-openapi-6134-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Sep 
4 13:48:22.776: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6422 apply -f -' Sep 4 13:48:23.102: INFO: stderr: "" Sep 4 13:48:23.102: INFO: stdout: "e2e-test-crd-publish-openapi-6134-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Sep 4 13:48:23.102: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6422 delete e2e-test-crd-publish-openapi-6134-crds test-cr' Sep 4 13:48:23.224: INFO: stderr: "" Sep 4 13:48:23.224: INFO: stdout: "e2e-test-crd-publish-openapi-6134-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Sep 4 13:48:23.225: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6134-crds' Sep 4 13:48:23.534: INFO: stderr: "" Sep 4 13:48:23.535: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6134-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:48:26.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6422" for this suite. 
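The schema feature this spec publishes is x-kubernetes-preserve-unknown-fields on a nested object: both server-side validation and kubectl's client-side OpenAPI validation then accept arbitrary properties there, while kubectl explain still renders the declared top-level fields, as in the output above. The harness's schema additionally tags the field as an embedded resource; the compressed sketch below keeps only the preserve-unknown-fields part, with invented group and kind names:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: waldos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: waldos
    singular: waldo
    kind: Waldo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            # Accept whatever the client sends under .spec:
            x-kubernetes-preserve-unknown-fields: true
EOF

# Unknown properties are allowed on create/apply ...
kubectl apply -f - <<'EOF'
apiVersion: example.com/v1
kind: Waldo
metadata:
  name: test-cr
spec:
  anything: goes
  nested:
    too: true
EOF

# ... and explain renders the published schema, as the log shows.
kubectl explain waldos
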
• [SLOW TEST:10.952 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":303,"completed":140,"skipped":2422,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:48:26.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-2087 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-2087 I0904 13:48:26.756103 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-2087, replica count: 2 I0904 13:48:29.806556 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0904 13:48:32.806858 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 4 13:48:32.806: INFO: Creating new exec pod Sep 4 13:48:37.833: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2087 execpod4st84 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Sep 4 13:48:38.066: INFO: stderr: "I0904 13:48:37.971520 1911 log.go:181] (0xc0006d33f0) (0xc0006ca960) Create stream\nI0904 13:48:37.971626 1911 log.go:181] (0xc0006d33f0) (0xc0006ca960) Stream added, broadcasting: 1\nI0904 13:48:37.976947 1911 log.go:181] (0xc0006d33f0) Reply frame received for 1\nI0904 13:48:37.976995 1911 log.go:181] (0xc0006d33f0) (0xc0006ca000) Create stream\nI0904 13:48:37.977010 1911 log.go:181] (0xc0006d33f0) (0xc0006ca000) Stream added, broadcasting: 3\nI0904 13:48:37.977751 1911 log.go:181] (0xc0006d33f0) Reply frame received for 3\nI0904 13:48:37.977775 1911 log.go:181] (0xc0006d33f0) (0xc000ab1e00) Create stream\nI0904 
13:48:37.977782 1911 log.go:181] (0xc0006d33f0) (0xc000ab1e00) Stream added, broadcasting: 5\nI0904 13:48:37.978485 1911 log.go:181] (0xc0006d33f0) Reply frame received for 5\nI0904 13:48:38.046843 1911 log.go:181] (0xc0006d33f0) Data frame received for 5\nI0904 13:48:38.046873 1911 log.go:181] (0xc000ab1e00) (5) Data frame handling\nI0904 13:48:38.046893 1911 log.go:181] (0xc000ab1e00) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0904 13:48:38.054515 1911 log.go:181] (0xc0006d33f0) Data frame received for 5\nI0904 13:48:38.054538 1911 log.go:181] (0xc000ab1e00) (5) Data frame handling\nI0904 13:48:38.054551 1911 log.go:181] (0xc000ab1e00) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0904 13:48:38.054600 1911 log.go:181] (0xc0006d33f0) Data frame received for 3\nI0904 13:48:38.054614 1911 log.go:181] (0xc0006ca000) (3) Data frame handling\nI0904 13:48:38.054856 1911 log.go:181] (0xc0006d33f0) Data frame received for 5\nI0904 13:48:38.054867 1911 log.go:181] (0xc000ab1e00) (5) Data frame handling\nI0904 13:48:38.056278 1911 log.go:181] (0xc0006d33f0) Data frame received for 1\nI0904 13:48:38.056296 1911 log.go:181] (0xc0006ca960) (1) Data frame handling\nI0904 13:48:38.056307 1911 log.go:181] (0xc0006ca960) (1) Data frame sent\nI0904 13:48:38.056317 1911 log.go:181] (0xc0006d33f0) (0xc0006ca960) Stream removed, broadcasting: 1\nI0904 13:48:38.056352 1911 log.go:181] (0xc0006d33f0) Go away received\nI0904 13:48:38.056646 1911 log.go:181] (0xc0006d33f0) (0xc0006ca960) Stream removed, broadcasting: 1\nI0904 13:48:38.056664 1911 log.go:181] (0xc0006d33f0) (0xc0006ca000) Stream removed, broadcasting: 3\nI0904 13:48:38.056675 1911 log.go:181] (0xc0006d33f0) (0xc000ab1e00) Stream removed, broadcasting: 5\n" Sep 4 13:48:38.066: INFO: stdout: "" Sep 4 13:48:38.067: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2087 execpod4st84 -- /bin/sh -x -c nc -zv -t -w 2 10.101.5.36 80' Sep 4 13:48:38.346: INFO: stderr: "I0904 13:48:38.268447 1929 log.go:181] (0xc000cb2d10) (0xc000c98500) Create stream\nI0904 13:48:38.268517 1929 log.go:181] (0xc000cb2d10) (0xc000c98500) Stream added, broadcasting: 1\nI0904 13:48:38.272920 1929 log.go:181] (0xc000cb2d10) Reply frame received for 1\nI0904 13:48:38.272954 1929 log.go:181] (0xc000cb2d10) (0xc000d0e000) Create stream\nI0904 13:48:38.272964 1929 log.go:181] (0xc000cb2d10) (0xc000d0e000) Stream added, broadcasting: 3\nI0904 13:48:38.273812 1929 log.go:181] (0xc000cb2d10) Reply frame received for 3\nI0904 13:48:38.273853 1929 log.go:181] (0xc000cb2d10) (0xc000d0e0a0) Create stream\nI0904 13:48:38.273868 1929 log.go:181] (0xc000cb2d10) (0xc000d0e0a0) Stream added, broadcasting: 5\nI0904 13:48:38.274623 1929 log.go:181] (0xc000cb2d10) Reply frame received for 5\nI0904 13:48:38.334284 1929 log.go:181] (0xc000cb2d10) Data frame received for 3\nI0904 13:48:38.334319 1929 log.go:181] (0xc000d0e000) (3) Data frame handling\nI0904 13:48:38.334351 1929 log.go:181] (0xc000cb2d10) Data frame received for 5\nI0904 13:48:38.334385 1929 log.go:181] (0xc000d0e0a0) (5) Data frame handling\nI0904 13:48:38.334419 1929 log.go:181] (0xc000d0e0a0) (5) Data frame sent\nI0904 13:48:38.334435 1929 log.go:181] (0xc000cb2d10) Data frame received for 5\nI0904 13:48:38.334449 1929 log.go:181] (0xc000d0e0a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.5.36 80\nConnection to 10.101.5.36 80 port [tcp/http] succeeded!\nI0904 13:48:38.335865 1929 
log.go:181] (0xc000cb2d10) Data frame received for 1\nI0904 13:48:38.335883 1929 log.go:181] (0xc000c98500) (1) Data frame handling\nI0904 13:48:38.335895 1929 log.go:181] (0xc000c98500) (1) Data frame sent\nI0904 13:48:38.335908 1929 log.go:181] (0xc000cb2d10) (0xc000c98500) Stream removed, broadcasting: 1\nI0904 13:48:38.335927 1929 log.go:181] (0xc000cb2d10) Go away received\nI0904 13:48:38.336290 1929 log.go:181] (0xc000cb2d10) (0xc000c98500) Stream removed, broadcasting: 1\nI0904 13:48:38.336307 1929 log.go:181] (0xc000cb2d10) (0xc000d0e000) Stream removed, broadcasting: 3\nI0904 13:48:38.336314 1929 log.go:181] (0xc000cb2d10) (0xc000d0e0a0) Stream removed, broadcasting: 5\n" Sep 4 13:48:38.346: INFO: stdout: "" Sep 4 13:48:38.346: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:48:38.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2087" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:12.055 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":303,"completed":141,"skipped":2439,"failed":0} SSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:48:38.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6091.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6091.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6091.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6091.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6091.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6091.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 4 13:48:46.745: INFO: DNS probes using dns-6091/dns-test-0fccb546-1675-4906-ac34-69b61cc02084 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:48:46.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6091" for this suite. • [SLOW TEST:8.274 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":303,"completed":142,"skipped":2443,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:48:46.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-36b6f3c2-b5dc-45a2-86a5-f5da3a79c4b1 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-36b6f3c2-b5dc-45a2-86a5-f5da3a79c4b1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:48:58.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2851" for this suite. • [SLOW TEST:11.330 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":143,"skipped":2468,"failed":0} SSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:48:58.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 13:50:58.380: INFO: Deleting pod "var-expansion-267685b7-3d20-432a-aea1-c1805a15336a" in namespace "var-expansion-8320" Sep 4 13:50:58.385: INFO: Wait up to 5m0s for pod "var-expansion-267685b7-3d20-432a-aea1-c1805a15336a" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:51:02.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8320" for this suite. 
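This is a negative test: subPathExpr supports only $(VAR) references to the container's own environment, so a value containing backticks must never be shell-expanded or substituted. Consistent with the two-minute window and the explicit delete above, the pod is admitted but never starts before the harness removes it. A hypothetical manifest showing the general shape of the failing mount — all names and the exact expression are invented, since the log does not show the value the harness used:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-backticks     # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: work
      mountPath: /data
      # Backticks are not part of the $(VAR) expansion grammar; an
      # expression like this cannot be safely substituted, so the pod
      # is expected to sit unstarted, as observed in the spec above.
      subPathExpr: "`hostname`/$(POD_NAME)"
  volumes:
  - name: work
    emptyDir: {}
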
• [SLOW TEST:124.219 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":303,"completed":144,"skipped":2473,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:51:02.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 13:51:02.491: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Sep 4 13:51:05.505: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6238 create -f -' Sep 4 13:51:09.205: INFO: stderr: "" Sep 4 13:51:09.205: INFO: stdout: "e2e-test-crd-publish-openapi-4741-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Sep 4 13:51:09.205: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6238 delete e2e-test-crd-publish-openapi-4741-crds test-cr' Sep 4 13:51:09.354: INFO: stderr: "" Sep 4 13:51:09.354: INFO: stdout: "e2e-test-crd-publish-openapi-4741-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Sep 4 13:51:09.354: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6238 apply -f -' Sep 4 13:51:09.710: INFO: stderr: "" Sep 4 13:51:09.710: INFO: stdout: "e2e-test-crd-publish-openapi-4741-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Sep 4 13:51:09.710: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6238 delete e2e-test-crd-publish-openapi-4741-crds test-cr' Sep 4 13:51:09.866: INFO: stderr: "" Sep 4 13:51:09.866: INFO: stdout: "e2e-test-crd-publish-openapi-4741-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Sep 4 13:51:09.866: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain 
e2e-test-crd-publish-openapi-4741-crds' Sep 4 13:51:10.157: INFO: stderr: "" Sep 4 13:51:10.158: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4741-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:51:13.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6238" for this suite. • [SLOW TEST:10.727 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":303,"completed":145,"skipped":2476,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should delete a collection of pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:51:13.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should delete a collection of pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pods Sep 4 13:51:13.187: INFO: created test-pod-1 Sep 4 13:51:13.255: INFO: created test-pod-2 Sep 4 13:51:13.259: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:51:13.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7164" for this suite. 
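"Delete a collection" exercises the DeleteCollection verb: the spec creates test-pod-1..3 with a shared label through the API, then removes them with one selector-scoped call rather than three individual DELETEs. The closest CLI rendering — label value invented, and note that kubectl may expand a selector delete into per-object requests where the raw API call would not — is:

# Create a small set of like-labeled pods, as the spec does via the API.
for i in 1 2 3; do
  kubectl run "test-pod-$i" --labels=type=Testing \
    --image=docker.io/library/busybox:1.29 -- sh -c 'sleep 3600'
done

# Remove the whole collection by selector ...
kubectl delete pods -l type=Testing

# ... and watch the list drain to empty, which is what the spec waits for.
kubectl get pods -l type=Testing
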
•{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":303,"completed":146,"skipped":2485,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:51:13.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-1305 Sep 4 13:51:19.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-1305 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Sep 4 13:51:19.968: INFO: stderr: "I0904 13:51:19.875972 2037 log.go:181] (0xc000386fd0) (0xc00038fe00) Create stream\nI0904 13:51:19.876029 2037 log.go:181] (0xc000386fd0) (0xc00038fe00) Stream added, broadcasting: 1\nI0904 13:51:19.878774 2037 log.go:181] (0xc000386fd0) Reply frame received for 1\nI0904 13:51:19.878811 2037 log.go:181] (0xc000386fd0) (0xc00054a320) Create stream\nI0904 13:51:19.878819 2037 log.go:181] (0xc000386fd0) (0xc00054a320) Stream added, broadcasting: 3\nI0904 13:51:19.879712 2037 log.go:181] (0xc000386fd0) Reply frame received for 3\nI0904 13:51:19.880158 2037 log.go:181] (0xc000386fd0) (0xc000496460) Create stream\nI0904 13:51:19.880183 2037 log.go:181] (0xc000386fd0) (0xc000496460) Stream added, broadcasting: 5\nI0904 13:51:19.881505 2037 log.go:181] (0xc000386fd0) Reply frame received for 5\nI0904 13:51:19.955063 2037 log.go:181] (0xc000386fd0) Data frame received for 3\nI0904 13:51:19.955100 2037 log.go:181] (0xc00054a320) (3) Data frame handling\nI0904 13:51:19.955120 2037 log.go:181] (0xc00054a320) (3) Data frame sent\nI0904 13:51:19.955154 2037 log.go:181] (0xc000386fd0) Data frame received for 3\nI0904 13:51:19.955163 2037 log.go:181] (0xc00054a320) (3) Data frame handling\nI0904 13:51:19.955192 2037 log.go:181] (0xc000386fd0) Data frame received for 5\nI0904 13:51:19.955201 2037 log.go:181] (0xc000496460) (5) Data frame handling\nI0904 13:51:19.955210 2037 log.go:181] (0xc000496460) (5) Data frame sent\nI0904 13:51:19.955217 2037 log.go:181] (0xc000386fd0) Data frame received for 5\nI0904 13:51:19.955223 2037 log.go:181] (0xc000496460) (5) Data frame handling\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0904 13:51:19.957312 2037 log.go:181] (0xc000386fd0) Data frame received for 1\nI0904 13:51:19.957342 2037 log.go:181] (0xc00038fe00) (1) Data frame handling\nI0904 13:51:19.957354 2037 log.go:181] (0xc00038fe00) (1) Data frame 
sent\nI0904 13:51:19.957371 2037 log.go:181] (0xc000386fd0) (0xc00038fe00) Stream removed, broadcasting: 1\nI0904 13:51:19.957391 2037 log.go:181] (0xc000386fd0) Go away received\nI0904 13:51:19.957706 2037 log.go:181] (0xc000386fd0) (0xc00038fe00) Stream removed, broadcasting: 1\nI0904 13:51:19.957729 2037 log.go:181] (0xc000386fd0) (0xc00054a320) Stream removed, broadcasting: 3\nI0904 13:51:19.957740 2037 log.go:181] (0xc000386fd0) (0xc000496460) Stream removed, broadcasting: 5\n" Sep 4 13:51:19.968: INFO: stdout: "iptables" Sep 4 13:51:19.968: INFO: proxyMode: iptables Sep 4 13:51:19.973: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 4 13:51:19.984: INFO: Pod kube-proxy-mode-detector still exists Sep 4 13:51:21.984: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 4 13:51:21.989: INFO: Pod kube-proxy-mode-detector still exists Sep 4 13:51:23.984: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 4 13:51:23.987: INFO: Pod kube-proxy-mode-detector still exists Sep 4 13:51:25.984: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 4 13:51:25.988: INFO: Pod kube-proxy-mode-detector still exists Sep 4 13:51:27.984: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 4 13:51:27.988: INFO: Pod kube-proxy-mode-detector still exists Sep 4 13:51:29.984: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 4 13:51:29.988: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-1305 STEP: creating replication controller affinity-clusterip-timeout in namespace services-1305 I0904 13:51:30.056392 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-1305, replica count: 3 I0904 13:51:33.106749 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0904 13:51:36.106948 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0904 13:51:39.107171 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 4 13:51:39.112: INFO: Creating new exec pod Sep 4 13:51:44.134: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-1305 execpod-affinityp78tf -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Sep 4 13:51:44.358: INFO: stderr: "I0904 13:51:44.289944 2055 log.go:181] (0xc00030c000) (0xc000a84000) Create stream\nI0904 13:51:44.290005 2055 log.go:181] (0xc00030c000) (0xc000a84000) Stream added, broadcasting: 1\nI0904 13:51:44.291527 2055 log.go:181] (0xc00030c000) Reply frame received for 1\nI0904 13:51:44.291551 2055 log.go:181] (0xc00030c000) (0xc000a840a0) Create stream\nI0904 13:51:44.291559 2055 log.go:181] (0xc00030c000) (0xc000a840a0) Stream added, broadcasting: 3\nI0904 13:51:44.292385 2055 log.go:181] (0xc00030c000) Reply frame received for 3\nI0904 13:51:44.292416 2055 log.go:181] (0xc00030c000) (0xc000c97540) Create stream\nI0904 13:51:44.292427 2055 log.go:181] (0xc00030c000) (0xc000c97540) Stream added, broadcasting: 5\nI0904 13:51:44.293263 2055 log.go:181] (0xc00030c000) Reply frame received for 5\nI0904 13:51:44.345733 2055 log.go:181] (0xc00030c000) Data frame received for 
5\nI0904 13:51:44.345784 2055 log.go:181] (0xc000c97540) (5) Data frame handling\nI0904 13:51:44.345801 2055 log.go:181] (0xc000c97540) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0904 13:51:44.345825 2055 log.go:181] (0xc00030c000) Data frame received for 3\nI0904 13:51:44.345838 2055 log.go:181] (0xc000a840a0) (3) Data frame handling\nI0904 13:51:44.345976 2055 log.go:181] (0xc00030c000) Data frame received for 5\nI0904 13:51:44.346000 2055 log.go:181] (0xc000c97540) (5) Data frame handling\nI0904 13:51:44.347481 2055 log.go:181] (0xc00030c000) Data frame received for 1\nI0904 13:51:44.347508 2055 log.go:181] (0xc000a84000) (1) Data frame handling\nI0904 13:51:44.347518 2055 log.go:181] (0xc000a84000) (1) Data frame sent\nI0904 13:51:44.347529 2055 log.go:181] (0xc00030c000) (0xc000a84000) Stream removed, broadcasting: 1\nI0904 13:51:44.347544 2055 log.go:181] (0xc00030c000) Go away received\nI0904 13:51:44.347931 2055 log.go:181] (0xc00030c000) (0xc000a84000) Stream removed, broadcasting: 1\nI0904 13:51:44.347946 2055 log.go:181] (0xc00030c000) (0xc000a840a0) Stream removed, broadcasting: 3\nI0904 13:51:44.347953 2055 log.go:181] (0xc00030c000) (0xc000c97540) Stream removed, broadcasting: 5\n" Sep 4 13:51:44.359: INFO: stdout: "" Sep 4 13:51:44.359: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-1305 execpod-affinityp78tf -- /bin/sh -x -c nc -zv -t -w 2 10.97.194.132 80' Sep 4 13:51:44.619: INFO: stderr: "I0904 13:51:44.511351 2073 log.go:181] (0xc0009c6fd0) (0xc000480140) Create stream\nI0904 13:51:44.511426 2073 log.go:181] (0xc0009c6fd0) (0xc000480140) Stream added, broadcasting: 1\nI0904 13:51:44.516892 2073 log.go:181] (0xc0009c6fd0) Reply frame received for 1\nI0904 13:51:44.516975 2073 log.go:181] (0xc0009c6fd0) (0xc0003cd360) Create stream\nI0904 13:51:44.517079 2073 log.go:181] (0xc0009c6fd0) (0xc0003cd360) Stream added, broadcasting: 3\nI0904 13:51:44.518182 2073 log.go:181] (0xc0009c6fd0) Reply frame received for 3\nI0904 13:51:44.518208 2073 log.go:181] (0xc0009c6fd0) (0xc000481680) Create stream\nI0904 13:51:44.518216 2073 log.go:181] (0xc0009c6fd0) (0xc000481680) Stream added, broadcasting: 5\nI0904 13:51:44.519086 2073 log.go:181] (0xc0009c6fd0) Reply frame received for 5\nI0904 13:51:44.610331 2073 log.go:181] (0xc0009c6fd0) Data frame received for 5\nI0904 13:51:44.610361 2073 log.go:181] (0xc000481680) (5) Data frame handling\nI0904 13:51:44.610368 2073 log.go:181] (0xc000481680) (5) Data frame sent\nI0904 13:51:44.610373 2073 log.go:181] (0xc0009c6fd0) Data frame received for 5\nI0904 13:51:44.610378 2073 log.go:181] (0xc000481680) (5) Data frame handling\n+ nc -zv -t -w 2 10.97.194.132 80\nConnection to 10.97.194.132 80 port [tcp/http] succeeded!\nI0904 13:51:44.610396 2073 log.go:181] (0xc0009c6fd0) Data frame received for 3\nI0904 13:51:44.610400 2073 log.go:181] (0xc0003cd360) (3) Data frame handling\nI0904 13:51:44.611894 2073 log.go:181] (0xc0009c6fd0) Data frame received for 1\nI0904 13:51:44.611924 2073 log.go:181] (0xc000480140) (1) Data frame handling\nI0904 13:51:44.611940 2073 log.go:181] (0xc000480140) (1) Data frame sent\nI0904 13:51:44.612010 2073 log.go:181] (0xc0009c6fd0) (0xc000480140) Stream removed, broadcasting: 1\nI0904 13:51:44.612095 2073 log.go:181] (0xc0009c6fd0) Go away received\nI0904 13:51:44.612346 2073 log.go:181] (0xc0009c6fd0) (0xc000480140) Stream 
removed, broadcasting: 1\nI0904 13:51:44.612359 2073 log.go:181] (0xc0009c6fd0) (0xc0003cd360) Stream removed, broadcasting: 3\nI0904 13:51:44.612365 2073 log.go:181] (0xc0009c6fd0) (0xc000481680) Stream removed, broadcasting: 5\n" Sep 4 13:51:44.619: INFO: stdout: "" Sep 4 13:51:44.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-1305 execpod-affinityp78tf -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.97.194.132:80/ ; done' Sep 4 13:51:44.928: INFO: stderr: "I0904 13:51:44.753054 2091 log.go:181] (0xc000bdf340) (0xc00065a820) Create stream\nI0904 13:51:44.753099 2091 log.go:181] (0xc000bdf340) (0xc00065a820) Stream added, broadcasting: 1\nI0904 13:51:44.755563 2091 log.go:181] (0xc000bdf340) Reply frame received for 1\nI0904 13:51:44.755614 2091 log.go:181] (0xc000bdf340) (0xc000b9e000) Create stream\nI0904 13:51:44.755630 2091 log.go:181] (0xc000bdf340) (0xc000b9e000) Stream added, broadcasting: 3\nI0904 13:51:44.756568 2091 log.go:181] (0xc000bdf340) Reply frame received for 3\nI0904 13:51:44.756613 2091 log.go:181] (0xc000bdf340) (0xc000b9e280) Create stream\nI0904 13:51:44.756625 2091 log.go:181] (0xc000bdf340) (0xc000b9e280) Stream added, broadcasting: 5\nI0904 13:51:44.757602 2091 log.go:181] (0xc000bdf340) Reply frame received for 5\nI0904 13:51:44.821303 2091 log.go:181] (0xc000bdf340) Data frame received for 5\nI0904 13:51:44.821340 2091 log.go:181] (0xc000b9e280) (5) Data frame handling\nI0904 13:51:44.821353 2091 log.go:181] (0xc000b9e280) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.194.132:80/\nI0904 13:51:44.821374 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.821383 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.821395 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.824016 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.824038 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.824060 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.824642 2091 log.go:181] (0xc000bdf340) Data frame received for 5\nI0904 13:51:44.824665 2091 log.go:181] (0xc000b9e280) (5) Data frame handling\nI0904 13:51:44.824671 2091 log.go:181] (0xc000b9e280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.194.132:80/\nI0904 13:51:44.824684 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.824706 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.824845 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.831467 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.831503 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.831517 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.831771 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.831790 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.831801 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.831822 2091 log.go:181] (0xc000bdf340) Data frame received for 5\nI0904 13:51:44.831843 2091 log.go:181] (0xc000b9e280) (5) Data frame handling\nI0904 13:51:44.831860 2091 log.go:181] (0xc000b9e280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.194.132:80/\nI0904 13:51:44.835520 2091 log.go:181] (0xc000bdf340) Data frame 
received for 3\nI0904 13:51:44.835536 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.835551 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.836135 2091 log.go:181] (0xc000bdf340) Data frame received for 5\nI0904 13:51:44.836163 2091 log.go:181] (0xc000b9e280) (5) Data frame handling\nI0904 13:51:44.836174 2091 log.go:181] (0xc000b9e280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.194.132:80/\nI0904 13:51:44.836205 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.836226 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.836246 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.841820 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.841842 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.841866 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.842517 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.842544 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.842556 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.842572 2091 log.go:181] (0xc000bdf340) Data frame received for 5\nI0904 13:51:44.842579 2091 log.go:181] (0xc000b9e280) (5) Data frame handling\nI0904 13:51:44.842586 2091 log.go:181] (0xc000b9e280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.194.132:80/\nI0904 13:51:44.849214 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.849245 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.849274 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.849717 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.849755 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.849773 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.849802 2091 log.go:181] (0xc000bdf340) Data frame received for 5\nI0904 13:51:44.849822 2091 log.go:181] (0xc000b9e280) (5) Data frame handling\nI0904 13:51:44.849856 2091 log.go:181] (0xc000b9e280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.194.132:80/\nI0904 13:51:44.855044 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.855063 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.855075 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.855574 2091 log.go:181] (0xc000bdf340) Data frame received for 5\nI0904 13:51:44.855604 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.855642 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.855667 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.855701 2091 log.go:181] (0xc000b9e280) (5) Data frame handling\nI0904 13:51:44.855730 2091 log.go:181] (0xc000b9e280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.194.132:80/\nI0904 13:51:44.862050 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.862070 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.862094 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.862917 2091 log.go:181] (0xc000bdf340) Data frame received for 5\nI0904 13:51:44.862944 2091 log.go:181] (0xc000b9e280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.194.132:80/\nI0904 13:51:44.862988 2091 log.go:181] (0xc000bdf340) Data frame 
received for 3\nI0904 13:51:44.863028 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.863043 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.863077 2091 log.go:181] (0xc000b9e280) (5) Data frame sent\nI0904 13:51:44.869119 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.869141 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.869161 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.870227 2091 log.go:181] (0xc000bdf340) Data frame received for 5\nI0904 13:51:44.870258 2091 log.go:181] (0xc000b9e280) (5) Data frame handling\nI0904 13:51:44.870270 2091 log.go:181] (0xc000b9e280) (5) Data frame sent\nI0904 13:51:44.870279 2091 log.go:181] (0xc000bdf340) Data frame received for 5\nI0904 13:51:44.870288 2091 log.go:181] (0xc000b9e280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.194.132:80/\nI0904 13:51:44.870307 2091 log.go:181] (0xc000b9e280) (5) Data frame sent\nI0904 13:51:44.870438 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.870451 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.870459 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.876684 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.876707 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.876857 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.877161 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.877177 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.877185 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.877198 2091 log.go:181] (0xc000bdf340) Data frame received for 5\nI0904 13:51:44.877203 2091 log.go:181] (0xc000b9e280) (5) Data frame handling\nI0904 13:51:44.877209 2091 log.go:181] (0xc000b9e280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.194.132:80/\nI0904 13:51:44.880196 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.880220 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.880236 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.880525 2091 log.go:181] (0xc000bdf340) Data frame received for 5\nI0904 13:51:44.880543 2091 log.go:181] (0xc000b9e280) (5) Data frame handling\nI0904 13:51:44.880553 2091 log.go:181] (0xc000b9e280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.194.132:80/\nI0904 13:51:44.880567 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.880574 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.880582 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.887291 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.887325 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.887356 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.887619 2091 log.go:181] (0xc000bdf340) Data frame received for 5\nI0904 13:51:44.887645 2091 log.go:181] (0xc000b9e280) (5) Data frame handling\nI0904 13:51:44.887653 2091 log.go:181] (0xc000b9e280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.194.132:80/\nI0904 13:51:44.887669 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.887689 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.887705 2091 log.go:181] (0xc000b9e000) (3) 
Data frame sent\nI0904 13:51:44.892470 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.892487 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.892501 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.893142 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.893169 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.893183 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.893199 2091 log.go:181] (0xc000bdf340) Data frame received for 5\nI0904 13:51:44.893204 2091 log.go:181] (0xc000b9e280) (5) Data frame handling\nI0904 13:51:44.893210 2091 log.go:181] (0xc000b9e280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.194.132:80/\nI0904 13:51:44.896924 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.896941 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.896947 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.897482 2091 log.go:181] (0xc000bdf340) Data frame received for 5\nI0904 13:51:44.897502 2091 log.go:181] (0xc000b9e280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeoutI0904 13:51:44.897520 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.897544 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.897568 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.897585 2091 log.go:181] (0xc000b9e280) (5) Data frame sent\nI0904 13:51:44.897610 2091 log.go:181] (0xc000bdf340) Data frame received for 5\nI0904 13:51:44.897618 2091 log.go:181] (0xc000b9e280) (5) Data frame handling\nI0904 13:51:44.897636 2091 log.go:181] (0xc000b9e280) (5) Data frame sent\n 2 http://10.97.194.132:80/\nI0904 13:51:44.903266 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.903280 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.903287 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.904191 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.904217 2091 log.go:181] (0xc000bdf340) Data frame received for 5\nI0904 13:51:44.904261 2091 log.go:181] (0xc000b9e280) (5) Data frame handling\nI0904 13:51:44.904279 2091 log.go:181] (0xc000b9e280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.194.132:80/\nI0904 13:51:44.904297 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.904310 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.907875 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.907892 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.907909 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.908538 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.908553 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.908564 2091 log.go:181] (0xc000b9e000) (3) Data frame sent\nI0904 13:51:44.908578 2091 log.go:181] (0xc000bdf340) Data frame received for 5\nI0904 13:51:44.908602 2091 log.go:181] (0xc000b9e280) (5) Data frame handling\nI0904 13:51:44.908614 2091 log.go:181] (0xc000b9e280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.194.132:80/\nI0904 13:51:44.914502 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.914519 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.914530 2091 log.go:181] (0xc000b9e000) 
(3) Data frame sent\nI0904 13:51:44.915267 2091 log.go:181] (0xc000bdf340) Data frame received for 5\nI0904 13:51:44.915288 2091 log.go:181] (0xc000bdf340) Data frame received for 3\nI0904 13:51:44.915313 2091 log.go:181] (0xc000b9e000) (3) Data frame handling\nI0904 13:51:44.915346 2091 log.go:181] (0xc000b9e280) (5) Data frame handling\nI0904 13:51:44.917098 2091 log.go:181] (0xc000bdf340) Data frame received for 1\nI0904 13:51:44.917118 2091 log.go:181] (0xc00065a820) (1) Data frame handling\nI0904 13:51:44.917132 2091 log.go:181] (0xc00065a820) (1) Data frame sent\nI0904 13:51:44.917181 2091 log.go:181] (0xc000bdf340) (0xc00065a820) Stream removed, broadcasting: 1\nI0904 13:51:44.917202 2091 log.go:181] (0xc000bdf340) Go away received\nI0904 13:51:44.917638 2091 log.go:181] (0xc000bdf340) (0xc00065a820) Stream removed, broadcasting: 1\nI0904 13:51:44.917657 2091 log.go:181] (0xc000bdf340) (0xc000b9e000) Stream removed, broadcasting: 3\nI0904 13:51:44.917667 2091 log.go:181] (0xc000bdf340) (0xc000b9e280) Stream removed, broadcasting: 5\n" Sep 4 13:51:44.929: INFO: stdout: "\naffinity-clusterip-timeout-2wcpk\naffinity-clusterip-timeout-2wcpk\naffinity-clusterip-timeout-2wcpk\naffinity-clusterip-timeout-2wcpk\naffinity-clusterip-timeout-2wcpk\naffinity-clusterip-timeout-2wcpk\naffinity-clusterip-timeout-2wcpk\naffinity-clusterip-timeout-2wcpk\naffinity-clusterip-timeout-2wcpk\naffinity-clusterip-timeout-2wcpk\naffinity-clusterip-timeout-2wcpk\naffinity-clusterip-timeout-2wcpk\naffinity-clusterip-timeout-2wcpk\naffinity-clusterip-timeout-2wcpk\naffinity-clusterip-timeout-2wcpk\naffinity-clusterip-timeout-2wcpk" Sep 4 13:51:44.929: INFO: Received response from host: affinity-clusterip-timeout-2wcpk Sep 4 13:51:44.929: INFO: Received response from host: affinity-clusterip-timeout-2wcpk Sep 4 13:51:44.929: INFO: Received response from host: affinity-clusterip-timeout-2wcpk Sep 4 13:51:44.929: INFO: Received response from host: affinity-clusterip-timeout-2wcpk Sep 4 13:51:44.929: INFO: Received response from host: affinity-clusterip-timeout-2wcpk Sep 4 13:51:44.929: INFO: Received response from host: affinity-clusterip-timeout-2wcpk Sep 4 13:51:44.929: INFO: Received response from host: affinity-clusterip-timeout-2wcpk Sep 4 13:51:44.929: INFO: Received response from host: affinity-clusterip-timeout-2wcpk Sep 4 13:51:44.929: INFO: Received response from host: affinity-clusterip-timeout-2wcpk Sep 4 13:51:44.929: INFO: Received response from host: affinity-clusterip-timeout-2wcpk Sep 4 13:51:44.929: INFO: Received response from host: affinity-clusterip-timeout-2wcpk Sep 4 13:51:44.929: INFO: Received response from host: affinity-clusterip-timeout-2wcpk Sep 4 13:51:44.929: INFO: Received response from host: affinity-clusterip-timeout-2wcpk Sep 4 13:51:44.929: INFO: Received response from host: affinity-clusterip-timeout-2wcpk Sep 4 13:51:44.929: INFO: Received response from host: affinity-clusterip-timeout-2wcpk Sep 4 13:51:44.929: INFO: Received response from host: affinity-clusterip-timeout-2wcpk Sep 4 13:51:44.929: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-1305 execpod-affinityp78tf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.97.194.132:80/' Sep 4 13:51:45.152: INFO: stderr: "I0904 13:51:45.064977 2109 log.go:181] (0xc0009c93f0) (0xc0009c08c0) Create stream\nI0904 13:51:45.065037 2109 log.go:181] (0xc0009c93f0) (0xc0009c08c0) Stream added, broadcasting: 1\nI0904 13:51:45.071000 2109 
log.go:181] (0xc0009c93f0) Reply frame received for 1\nI0904 13:51:45.071037 2109 log.go:181] (0xc0009c93f0) (0xc0005e21e0) Create stream\nI0904 13:51:45.071051 2109 log.go:181] (0xc0009c93f0) (0xc0005e21e0) Stream added, broadcasting: 3\nI0904 13:51:45.072054 2109 log.go:181] (0xc0009c93f0) Reply frame received for 3\nI0904 13:51:45.072087 2109 log.go:181] (0xc0009c93f0) (0xc0003cc5a0) Create stream\nI0904 13:51:45.072098 2109 log.go:181] (0xc0009c93f0) (0xc0003cc5a0) Stream added, broadcasting: 5\nI0904 13:51:45.073214 2109 log.go:181] (0xc0009c93f0) Reply frame received for 5\nI0904 13:51:45.135454 2109 log.go:181] (0xc0009c93f0) Data frame received for 5\nI0904 13:51:45.135479 2109 log.go:181] (0xc0003cc5a0) (5) Data frame handling\nI0904 13:51:45.135498 2109 log.go:181] (0xc0003cc5a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.97.194.132:80/\nI0904 13:51:45.139254 2109 log.go:181] (0xc0009c93f0) Data frame received for 3\nI0904 13:51:45.139273 2109 log.go:181] (0xc0005e21e0) (3) Data frame handling\nI0904 13:51:45.139283 2109 log.go:181] (0xc0005e21e0) (3) Data frame sent\nI0904 13:51:45.139851 2109 log.go:181] (0xc0009c93f0) Data frame received for 5\nI0904 13:51:45.139951 2109 log.go:181] (0xc0003cc5a0) (5) Data frame handling\nI0904 13:51:45.140002 2109 log.go:181] (0xc0009c93f0) Data frame received for 3\nI0904 13:51:45.140033 2109 log.go:181] (0xc0005e21e0) (3) Data frame handling\nI0904 13:51:45.141562 2109 log.go:181] (0xc0009c93f0) Data frame received for 1\nI0904 13:51:45.141585 2109 log.go:181] (0xc0009c08c0) (1) Data frame handling\nI0904 13:51:45.141595 2109 log.go:181] (0xc0009c08c0) (1) Data frame sent\nI0904 13:51:45.141607 2109 log.go:181] (0xc0009c93f0) (0xc0009c08c0) Stream removed, broadcasting: 1\nI0904 13:51:45.141631 2109 log.go:181] (0xc0009c93f0) Go away received\nI0904 13:51:45.141952 2109 log.go:181] (0xc0009c93f0) (0xc0009c08c0) Stream removed, broadcasting: 1\nI0904 13:51:45.141970 2109 log.go:181] (0xc0009c93f0) (0xc0005e21e0) Stream removed, broadcasting: 3\nI0904 13:51:45.141978 2109 log.go:181] (0xc0009c93f0) (0xc0003cc5a0) Stream removed, broadcasting: 5\n" Sep 4 13:51:45.153: INFO: stdout: "affinity-clusterip-timeout-2wcpk" Sep 4 13:52:00.153: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-1305 execpod-affinityp78tf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.97.194.132:80/' Sep 4 13:52:00.356: INFO: stderr: "I0904 13:52:00.296323 2127 log.go:181] (0xc00018c420) (0xc000971180) Create stream\nI0904 13:52:00.296398 2127 log.go:181] (0xc00018c420) (0xc000971180) Stream added, broadcasting: 1\nI0904 13:52:00.298160 2127 log.go:181] (0xc00018c420) Reply frame received for 1\nI0904 13:52:00.298209 2127 log.go:181] (0xc00018c420) (0xc000a1ca00) Create stream\nI0904 13:52:00.298228 2127 log.go:181] (0xc00018c420) (0xc000a1ca00) Stream added, broadcasting: 3\nI0904 13:52:00.299323 2127 log.go:181] (0xc00018c420) Reply frame received for 3\nI0904 13:52:00.299361 2127 log.go:181] (0xc00018c420) (0xc000971ae0) Create stream\nI0904 13:52:00.299379 2127 log.go:181] (0xc00018c420) (0xc000971ae0) Stream added, broadcasting: 5\nI0904 13:52:00.300323 2127 log.go:181] (0xc00018c420) Reply frame received for 5\nI0904 13:52:00.348858 2127 log.go:181] (0xc00018c420) Data frame received for 5\nI0904 13:52:00.348887 2127 log.go:181] (0xc000971ae0) (5) Data frame handling\nI0904 13:52:00.348905 2127 log.go:181] (0xc000971ae0) (5) Data frame sent\n+ curl -q 
-s --connect-timeout 2 http://10.97.194.132:80/\nI0904 13:52:00.349447 2127 log.go:181] (0xc00018c420) Data frame received for 3\nI0904 13:52:00.349472 2127 log.go:181] (0xc000a1ca00) (3) Data frame handling\nI0904 13:52:00.349486 2127 log.go:181] (0xc000a1ca00) (3) Data frame sent\nI0904 13:52:00.349493 2127 log.go:181] (0xc00018c420) Data frame received for 3\nI0904 13:52:00.349498 2127 log.go:181] (0xc000a1ca00) (3) Data frame handling\nI0904 13:52:00.349737 2127 log.go:181] (0xc00018c420) Data frame received for 5\nI0904 13:52:00.349754 2127 log.go:181] (0xc000971ae0) (5) Data frame handling\nI0904 13:52:00.350663 2127 log.go:181] (0xc00018c420) Data frame received for 1\nI0904 13:52:00.350687 2127 log.go:181] (0xc000971180) (1) Data frame handling\nI0904 13:52:00.350702 2127 log.go:181] (0xc000971180) (1) Data frame sent\nI0904 13:52:00.350732 2127 log.go:181] (0xc00018c420) (0xc000971180) Stream removed, broadcasting: 1\nI0904 13:52:00.350754 2127 log.go:181] (0xc00018c420) Go away received\nI0904 13:52:00.351041 2127 log.go:181] (0xc00018c420) (0xc000971180) Stream removed, broadcasting: 1\nI0904 13:52:00.351052 2127 log.go:181] (0xc00018c420) (0xc000a1ca00) Stream removed, broadcasting: 3\nI0904 13:52:00.351057 2127 log.go:181] (0xc00018c420) (0xc000971ae0) Stream removed, broadcasting: 5\n" Sep 4 13:52:00.357: INFO: stdout: "affinity-clusterip-timeout-2wcpk" Sep 4 13:52:15.357: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-1305 execpod-affinityp78tf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.97.194.132:80/' Sep 4 13:52:15.599: INFO: stderr: "I0904 13:52:15.512637 2145 log.go:181] (0xc000642b00) (0xc000716aa0) Create stream\nI0904 13:52:15.512688 2145 log.go:181] (0xc000642b00) (0xc000716aa0) Stream added, broadcasting: 1\nI0904 13:52:15.514135 2145 log.go:181] (0xc000642b00) Reply frame received for 1\nI0904 13:52:15.514181 2145 log.go:181] (0xc000642b00) (0xc000738000) Create stream\nI0904 13:52:15.514191 2145 log.go:181] (0xc000642b00) (0xc000738000) Stream added, broadcasting: 3\nI0904 13:52:15.514831 2145 log.go:181] (0xc000642b00) Reply frame received for 3\nI0904 13:52:15.514855 2145 log.go:181] (0xc000642b00) (0xc0009a43c0) Create stream\nI0904 13:52:15.514862 2145 log.go:181] (0xc000642b00) (0xc0009a43c0) Stream added, broadcasting: 5\nI0904 13:52:15.515391 2145 log.go:181] (0xc000642b00) Reply frame received for 5\nI0904 13:52:15.580184 2145 log.go:181] (0xc000642b00) Data frame received for 5\nI0904 13:52:15.580212 2145 log.go:181] (0xc0009a43c0) (5) Data frame handling\nI0904 13:52:15.580228 2145 log.go:181] (0xc0009a43c0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.97.194.132:80/\nI0904 13:52:15.583977 2145 log.go:181] (0xc000642b00) Data frame received for 3\nI0904 13:52:15.584019 2145 log.go:181] (0xc000738000) (3) Data frame handling\nI0904 13:52:15.584049 2145 log.go:181] (0xc000738000) (3) Data frame sent\nI0904 13:52:15.585244 2145 log.go:181] (0xc000642b00) Data frame received for 3\nI0904 13:52:15.585270 2145 log.go:181] (0xc000738000) (3) Data frame handling\nI0904 13:52:15.585292 2145 log.go:181] (0xc000642b00) Data frame received for 5\nI0904 13:52:15.585318 2145 log.go:181] (0xc0009a43c0) (5) Data frame handling\nI0904 13:52:15.587001 2145 log.go:181] (0xc000642b00) Data frame received for 1\nI0904 13:52:15.587056 2145 log.go:181] (0xc000716aa0) (1) Data frame handling\nI0904 13:52:15.587072 2145 log.go:181] (0xc000716aa0) (1) Data frame 
sent\nI0904 13:52:15.587085 2145 log.go:181] (0xc000642b00) (0xc000716aa0) Stream removed, broadcasting: 1\nI0904 13:52:15.587100 2145 log.go:181] (0xc000642b00) Go away received\nI0904 13:52:15.587683 2145 log.go:181] (0xc000642b00) (0xc000716aa0) Stream removed, broadcasting: 1\nI0904 13:52:15.587709 2145 log.go:181] (0xc000642b00) (0xc000738000) Stream removed, broadcasting: 3\nI0904 13:52:15.587722 2145 log.go:181] (0xc000642b00) (0xc0009a43c0) Stream removed, broadcasting: 5\n" Sep 4 13:52:15.599: INFO: stdout: "affinity-clusterip-timeout-z9jbr" Sep 4 13:52:15.599: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-1305, will wait for the garbage collector to delete the pods Sep 4 13:52:15.720: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 20.310111ms Sep 4 13:52:18.021: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 2.300233388s [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:52:30.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1305" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:76.543 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":147,"skipped":2503,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:52:30.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Sep 4 13:52:30.261: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 4 13:52:48.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3538" for this suite.
• [SLOW TEST:18.460 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
updates the published spec when one version gets renamed [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":303,"completed":148,"skipped":2514,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 4 13:52:48.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
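For orientation: the wait loop that follows polls until every schedulable node runs one daemon pod; latest-control-plane is skipped because this DaemonSet carries no toleration for the node-role.kubernetes.io/master NoSchedule taint. The same launch check can be sketched by hand with kubectl; this is a minimal illustration, not the test's code, and the namespace "ds-demo", the app label, and the image choice are assumptions:

kubectl create namespace ds-demo   # scratch namespace, assumed not to exist yet
kubectl create -n ds-demo -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine   # any long-running image works
EOF
# Wait for the rollout, then compare desired vs. ready counts, which is
# what the "Number of nodes with available pods" polling below reports.
kubectl rollout status -n ds-demo daemonset/daemon-set --timeout=120s
kubectl get -n ds-demo daemonset/daemon-set -o jsonpath='{.status.desiredNumberScheduled} {.status.numberReady}{"\n"}'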
Sep 4 13:52:48.781: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 4 13:52:48.789: INFO: Number of nodes with available pods: 0
Sep 4 13:52:48.789: INFO: Node latest-worker is running more than one daemon pod
Sep 4 13:52:49.794: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 4 13:52:49.798: INFO: Number of nodes with available pods: 0
Sep 4 13:52:49.798: INFO: Node latest-worker is running more than one daemon pod
Sep 4 13:52:50.794: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 4 13:52:50.976: INFO: Number of nodes with available pods: 0
Sep 4 13:52:50.976: INFO: Node latest-worker is running more than one daemon pod
Sep 4 13:52:52.024: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 4 13:52:52.234: INFO: Number of nodes with available pods: 0
Sep 4 13:52:52.234: INFO: Node latest-worker is running more than one daemon pod
Sep 4 13:52:52.793: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 4 13:52:52.799: INFO: Number of nodes with available pods: 0
Sep 4 13:52:52.799: INFO: Node latest-worker is running more than one daemon pod
Sep 4 13:52:53.888: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 4 13:52:53.891: INFO: Number of nodes with available pods: 0
Sep 4 13:52:53.891: INFO: Node latest-worker is running more than one daemon pod
Sep 4 13:52:54.793: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 4 13:52:54.796: INFO: Number of nodes with available pods: 2
Sep 4 13:52:54.796: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Sep 4 13:52:54.848: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 4 13:52:54.866: INFO: Number of nodes with available pods: 2
Sep 4 13:52:54.866: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
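The step just logged flips one daemon pod's status.phase to Failed through the API and expects the controller to replace it. A rough hands-on analogue (assuming the ds-demo DaemonSet from the sketch above) is to remove a pod and watch a substitute get scheduled:

# Pick one daemon pod and delete it; the DaemonSet controller should
# recreate it to restore one pod per schedulable node.
POD=$(kubectl get pods -n ds-demo -l app=daemon-set -o jsonpath='{.items[0].metadata.name}')
kubectl delete pod -n ds-demo "$POD"
kubectl get pods -n ds-demo -l app=daemon-set --watch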
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5816, will wait for the garbage collector to delete the pods
Sep 4 13:52:56.025: INFO: Deleting DaemonSet.extensions daemon-set took: 6.642335ms
Sep 4 13:52:56.626: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.345377ms
Sep 4 13:53:10.130: INFO: Number of nodes with available pods: 0
Sep 4 13:53:10.130: INFO: Number of running nodes: 0, number of available pods: 0
Sep 4 13:53:10.134: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5816/daemonsets","resourceVersion":"6815507"},"items":null}
Sep 4 13:53:10.136: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5816/pods","resourceVersion":"6815507"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 4 13:53:10.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5816" for this suite.
• [SLOW TEST:21.554 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should retry creating failed daemon pods [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":303,"completed":149,"skipped":2521,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 4 13:53:10.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-5c76cb0a-167c-487f-b8fd-2b5ef2b63c54
STEP: Creating a pod to test consume secrets
Sep 4 13:53:10.293: INFO: Waiting up to 5m0s for pod "pod-secrets-d95f5ed4-cdf5-4b17-ac31-e45b4925246c" in namespace "secrets-5007" to be "Succeeded or Failed"
Sep 4 13:53:10.297: INFO: Pod "pod-secrets-d95f5ed4-cdf5-4b17-ac31-e45b4925246c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.705277ms
Sep 4 13:53:12.301: INFO: Pod "pod-secrets-d95f5ed4-cdf5-4b17-ac31-e45b4925246c": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.008278514s Sep 4 13:53:14.305: INFO: Pod "pod-secrets-d95f5ed4-cdf5-4b17-ac31-e45b4925246c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012126987s Sep 4 13:53:16.309: INFO: Pod "pod-secrets-d95f5ed4-cdf5-4b17-ac31-e45b4925246c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015746597s STEP: Saw pod success Sep 4 13:53:16.309: INFO: Pod "pod-secrets-d95f5ed4-cdf5-4b17-ac31-e45b4925246c" satisfied condition "Succeeded or Failed" Sep 4 13:53:16.311: INFO: Trying to get logs from node latest-worker pod pod-secrets-d95f5ed4-cdf5-4b17-ac31-e45b4925246c container secret-volume-test: STEP: delete the pod Sep 4 13:53:16.382: INFO: Waiting for pod pod-secrets-d95f5ed4-cdf5-4b17-ac31-e45b4925246c to disappear Sep 4 13:53:16.386: INFO: Pod pod-secrets-d95f5ed4-cdf5-4b17-ac31-e45b4925246c no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:53:16.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5007" for this suite. • [SLOW TEST:6.224 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":150,"skipped":2529,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:53:16.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:53:16.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4527" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":303,"completed":151,"skipped":2555,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:53:16.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:53:32.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-668" for this suite. • [SLOW TEST:16.288 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":303,"completed":152,"skipped":2564,"failed":0} SSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:53:32.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-3474/configmap-test-3560c0a9-b02c-46e9-974b-e37fcf3f253d STEP: Creating a pod to test consume configMaps Sep 4 13:53:32.977: INFO: Waiting up to 5m0s for pod "pod-configmaps-b5ff4e44-f798-4388-ab15-6d3e3b13692b" in namespace "configmap-3474" to be "Succeeded or Failed" Sep 4 13:53:33.004: INFO: Pod "pod-configmaps-b5ff4e44-f798-4388-ab15-6d3e3b13692b": Phase="Pending", Reason="", readiness=false. Elapsed: 26.972774ms Sep 4 13:53:35.008: INFO: Pod "pod-configmaps-b5ff4e44-f798-4388-ab15-6d3e3b13692b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030879805s Sep 4 13:53:37.012: INFO: Pod "pod-configmaps-b5ff4e44-f798-4388-ab15-6d3e3b13692b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035199717s Sep 4 13:53:39.016: INFO: Pod "pod-configmaps-b5ff4e44-f798-4388-ab15-6d3e3b13692b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.039633994s STEP: Saw pod success Sep 4 13:53:39.017: INFO: Pod "pod-configmaps-b5ff4e44-f798-4388-ab15-6d3e3b13692b" satisfied condition "Succeeded or Failed" Sep 4 13:53:39.019: INFO: Trying to get logs from node latest-worker pod pod-configmaps-b5ff4e44-f798-4388-ab15-6d3e3b13692b container env-test: STEP: delete the pod Sep 4 13:53:39.077: INFO: Waiting for pod pod-configmaps-b5ff4e44-f798-4388-ab15-6d3e3b13692b to disappear Sep 4 13:53:39.123: INFO: Pod pod-configmaps-b5ff4e44-f798-4388-ab15-6d3e3b13692b no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:53:39.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3474" for this suite. 
• [SLOW TEST:6.299 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34
should be consumable via the environment [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":153,"skipped":2575,"failed":0}
SSSS
------------------------------
[sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 4 13:53:39.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 4 13:53:39.198: INFO: Creating deployment "test-recreate-deployment"
Sep 4 13:53:39.214: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Sep 4 13:53:39.244: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Sep 4 13:53:41.381: INFO: Waiting deployment "test-recreate-deployment" to complete
Sep 4 13:53:41.383: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824419, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824419, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824419, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824419, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 4 13:53:43.387: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Sep 4 13:53:43.394: INFO: Updating deployment test-recreate-deployment
Sep 4 13:53:43.394: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
Sep 4 13:53:44.152: INFO:
Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-3904 /apis/apps/v1/namespaces/deployment-3904/deployments/test-recreate-deployment 75573e0a-9a8f-4566-99f0-1f0120bf6c9f 6815764 2 2020-09-04 13:53:39 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-09-04 13:53:43 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-04 13:53:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005107138 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-09-04 13:53:43 +0000 UTC,LastTransitionTime:2020-09-04 13:53:43 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2020-09-04 13:53:44 +0000 UTC,LastTransitionTime:2020-09-04 13:53:39 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Sep 4 13:53:44.252: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-3904 
/apis/apps/v1/namespaces/deployment-3904/replicasets/test-recreate-deployment-f79dd4667 4ba30a9d-3483-4d1e-8b2e-66f7fc816812 6815762 1 2020-09-04 13:53:43 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 75573e0a-9a8f-4566-99f0-1f0120bf6c9f 0xc00512a870 0xc00512a871}] [] [{kube-controller-manager Update apps/v1 2020-09-04 13:53:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"75573e0a-9a8f-4566-99f0-1f0120bf6c9f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00512a908 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 4 13:53:44.252: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Sep 4 13:53:44.252: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-c96cf48f deployment-3904 /apis/apps/v1/namespaces/deployment-3904/replicasets/test-recreate-deployment-c96cf48f 65ef1269-24b4-4e37-a6a3-7e4c68f7bcba 6815753 2 2020-09-04 13:53:39 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 75573e0a-9a8f-4566-99f0-1f0120bf6c9f 0xc00512a74f 0xc00512a760}] [] [{kube-controller-manager Update apps/v1 2020-09-04 13:53:43 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"75573e0a-9a8f-4566-99f0-1f0120bf6c9f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: c96cf48f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00512a7f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 4 13:53:44.259: INFO: Pod "test-recreate-deployment-f79dd4667-mdvp2" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-mdvp2 test-recreate-deployment-f79dd4667- deployment-3904 /api/v1/namespaces/deployment-3904/pods/test-recreate-deployment-f79dd4667-mdvp2 918de16d-ec52-47aa-b02d-b6dd3a5f76b0 6815758 0 2020-09-04 13:53:43 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 4ba30a9d-3483-4d1e-8b2e-66f7fc816812 0xc00512af50 0xc00512af51}] [] [{kube-controller-manager Update v1 2020-09-04 13:53:43 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ba30a9d-3483-4d1e-8b2e-66f7fc816812\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wwrnt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wwrnt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wwrnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime
:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:53:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:53:44.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3904" for this suite. • [SLOW TEST:5.358 seconds] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":154,"skipped":2579,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:53:44.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 13:53:44.941: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:53:51.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8506" for this suite. 
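This spec drives the command through the pod's exec subresource over a websocket instead of shelling out to kubectl. The everyday CLI equivalent, assuming a running pod named shell-demo whose image contains /bin/sh:

kubectl exec shell-demo -- /bin/sh -c 'echo remote command execution works'
# kubectl negotiates a streaming connection upgrade against the exec
# subresource; the request URL has this general shape:
#   /api/v1/namespaces/<namespace>/pods/<pod>/exec?command=sh&stdin=true&stdout=true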
• [SLOW TEST:6.746 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":303,"completed":155,"skipped":2596,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:53:51.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should create services for rc [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Sep 4 13:53:51.284: INFO: namespace kubectl-5186 Sep 4 13:53:51.284: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5186' Sep 4 13:53:51.662: INFO: stderr: "" Sep 4 13:53:51.662: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Sep 4 13:53:52.666: INFO: Selector matched 1 pods for map[app:agnhost] Sep 4 13:53:52.666: INFO: Found 0 / 1 Sep 4 13:53:53.999: INFO: Selector matched 1 pods for map[app:agnhost] Sep 4 13:53:53.999: INFO: Found 0 / 1 Sep 4 13:53:54.689: INFO: Selector matched 1 pods for map[app:agnhost] Sep 4 13:53:54.689: INFO: Found 0 / 1 Sep 4 13:53:55.667: INFO: Selector matched 1 pods for map[app:agnhost] Sep 4 13:53:55.667: INFO: Found 1 / 1 Sep 4 13:53:55.667: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Sep 4 13:53:55.669: INFO: Selector matched 1 pods for map[app:agnhost] Sep 4 13:53:55.669: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Sep 4 13:53:55.669: INFO: wait on agnhost-primary startup in kubectl-5186 Sep 4 13:53:55.669: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs agnhost-primary-jgnwd agnhost-primary --namespace=kubectl-5186' Sep 4 13:53:55.787: INFO: stderr: "" Sep 4 13:53:55.787: INFO: stdout: "Paused\n" STEP: exposing RC Sep 4 13:53:55.787: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5186' Sep 4 13:53:55.975: INFO: stderr: "" Sep 4 13:53:55.975: INFO: stdout: "service/rm2 exposed\n" Sep 4 13:53:55.983: INFO: Service rm2 in namespace kubectl-5186 found. 
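kubectl expose above synthesized Service rm2 from the replication controller's selector, mapping service port 1234 to container port 6379. One way to confirm the wiring after the fact (namespace as in the log):

kubectl get service rm2 --namespace=kubectl-5186 -o jsonpath='{.spec.ports[0].port} -> {.spec.ports[0].targetPort}{"\n"}'
# Expect: 1234 -> 6379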
STEP: exposing service Sep 4 13:53:57.994: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5186' Sep 4 13:53:58.158: INFO: stderr: "" Sep 4 13:53:58.158: INFO: stdout: "service/rm3 exposed\n" Sep 4 13:53:58.165: INFO: Service rm3 in namespace kubectl-5186 found. [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:54:00.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5186" for this suite. • [SLOW TEST:8.941 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1246 should create services for rc [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":303,"completed":156,"skipped":2598,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:54:00.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should support rollover [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 13:54:00.331: INFO: Pod name rollover-pod: Found 0 pods out of 1 Sep 4 13:54:05.342: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Sep 4 13:54:05.342: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Sep 4 13:54:07.346: INFO: Creating deployment "test-rollover-deployment" Sep 4 13:54:07.380: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Sep 4 13:54:09.389: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Sep 4 13:54:09.395: INFO: Ensure that both replica sets have 1 created replica Sep 4 13:54:09.400: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Sep 4 13:54:09.407: INFO: Updating deployment test-rollover-deployment Sep 4 13:54:09.407: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Sep 4 13:54:11.461: INFO: Wait for revision update of deployment 
"test-rollover-deployment" to 2 Sep 4 13:54:11.467: INFO: Make sure deployment "test-rollover-deployment" is complete Sep 4 13:54:11.472: INFO: all replica sets need to contain the pod-template-hash label Sep 4 13:54:11.472: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824447, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824447, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824449, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824447, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 4 13:54:13.479: INFO: all replica sets need to contain the pod-template-hash label Sep 4 13:54:13.479: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824447, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824447, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824449, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824447, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 4 13:54:15.482: INFO: all replica sets need to contain the pod-template-hash label Sep 4 13:54:15.482: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824447, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824447, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824454, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824447, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 4 13:54:17.480: INFO: all replica sets need to contain the pod-template-hash label Sep 4 13:54:17.480: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824447, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824447, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824454, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824447, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 4 13:54:19.481: INFO: all replica sets need to contain the pod-template-hash label Sep 4 13:54:19.481: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824447, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824447, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824454, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824447, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 4 13:54:21.510: INFO: all replica sets need to contain the pod-template-hash label Sep 4 13:54:21.510: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824447, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824447, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824454, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824447, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 4 13:54:23.481: INFO: all replica sets need to contain the pod-template-hash label Sep 4 13:54:23.481: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824447, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824447, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824454, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824447, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 4 13:54:25.668: INFO: Sep 4 13:54:25.668: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 4 13:54:25.682: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-8517 /apis/apps/v1/namespaces/deployment-8517/deployments/test-rollover-deployment 0d891595-188d-4b2e-9edc-702fe89e4304 6816061 2 2020-09-04 13:54:07 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-09-04 13:54:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-04 13:54:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005ecc968 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-09-04 13:54:07 +0000 UTC,LastTransitionTime:2020-09-04 13:54:07 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-5797c7764" has successfully progressed.,LastUpdateTime:2020-09-04 13:54:24 +0000 UTC,LastTransitionTime:2020-09-04 13:54:07 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Sep 4 13:54:25.684: INFO: New ReplicaSet "test-rollover-deployment-5797c7764" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-5797c7764 deployment-8517 /apis/apps/v1/namespaces/deployment-8517/replicasets/test-rollover-deployment-5797c7764 c2aad706-596a-43b7-bd1d-ab2749a482ec 6816050 2 2020-09-04 13:54:09 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 0d891595-188d-4b2e-9edc-702fe89e4304 0xc005304a70 0xc005304a71}] [] [{kube-controller-manager Update apps/v1 2020-09-04 13:54:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d891595-188d-4b2e-9edc-702fe89e4304\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5797c7764,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005304b08 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Sep 4 13:54:25.684: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Sep 4 13:54:25.684: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-8517 /apis/apps/v1/namespaces/deployment-8517/replicasets/test-rollover-controller b6b387c0-7479-44eb-9700-c915a591aedc 6816060 2 2020-09-04 13:54:00 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 0d891595-188d-4b2e-9edc-702fe89e4304 0xc00530491f 0xc005304930}] [] [{e2e.test Update apps/v1 2020-09-04 13:54:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-04 13:54:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d891595-188d-4b2e-9edc-702fe89e4304\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0053049c8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 4 13:54:25.684: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-8517 /apis/apps/v1/namespaces/deployment-8517/replicasets/test-rollover-deployment-78bc8b888c 00d25238-17d0-424e-b25b-709c650bc3d5 6815996 2 2020-09-04 13:54:07 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 0d891595-188d-4b2e-9edc-702fe89e4304 0xc005304b87 0xc005304b88}] [] 
[{kube-controller-manager Update apps/v1 2020-09-04 13:54:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d891595-188d-4b2e-9edc-702fe89e4304\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005304c28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 4 13:54:25.686: INFO: Pod "test-rollover-deployment-5797c7764-b47r7" is available: &Pod{ObjectMeta:{test-rollover-deployment-5797c7764-b47r7 test-rollover-deployment-5797c7764- deployment-8517 /api/v1/namespaces/deployment-8517/pods/test-rollover-deployment-5797c7764-b47r7 ad459857-91db-4dfc-acb0-0f61f1af9114 6816018 0 2020-09-04 13:54:09 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [{apps/v1 ReplicaSet test-rollover-deployment-5797c7764 c2aad706-596a-43b7-bd1d-ab2749a482ec 0xc005305350 0xc005305351}] [] [{kube-controller-manager Update v1 2020-09-04 13:54:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2aad706-596a-43b7-bd1d-ab2749a482ec\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-04 13:54:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.247\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gj4xz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gj4xz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gj4xz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toler
ation{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:54:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:54:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:54:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 13:54:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.247,StartTime:2020-09-04 13:54:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-04 13:54:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://3910cf3f395e62ca871189b2654eba7608ee39341c26734685171fbeed747f9e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.247,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:54:25.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8517" for this suite. 
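The strategy fields in the deployment dump above are what make the rollover safe: maxUnavailable=0 with maxSurge=1 rolls pods one at a time with no availability gap, and minReadySeconds=10 forces each new pod to stay ready for ten seconds before the old ReplicaSets are scaled to zero. A simplified stand-alone sketch follows; the actual test drives the update through the Go client and also renames the container (which `kubectl set image` cannot do), and the new tag used to trigger the rollover below is hypothetical:

# Strategy values mirrored from the dump above.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
EOF

# Change the pod template to start a rollover (hypothetical tag), then
# wait until the new ReplicaSet has fully taken over.
kubectl set image deployment/test-rollover-deployment agnhost=k8s.gcr.io/e2e-test-images/agnhost:2.21
kubectl rollout status deployment/test-rollover-deployment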
• [SLOW TEST:25.515 seconds] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":303,"completed":157,"skipped":2610,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:54:25.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-9d27bfd7-fa4c-4716-b45e-828bb419f6f3 STEP: Creating configMap with name cm-test-opt-upd-1acccdfe-e786-43a5-ae6c-0248874067db STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-9d27bfd7-fa4c-4716-b45e-828bb419f6f3 STEP: Updating configmap cm-test-opt-upd-1acccdfe-e786-43a5-ae6c-0248874067db STEP: Creating configMap with name cm-test-opt-create-2a85bd90-9113-4f12-86a9-a563f3e372b4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:54:36.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4759" for this suite. 
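What this test exercises: a configMap volume marked optional: true mounts even while its ConfigMap is absent, and the kubelet later syncs deletions, updates, and late creations into the mounted files. A single-volume sketch with illustrative names and keys (the test's actual pod spec and data keys are not printed in the log):

# The referenced ConfigMap does not exist yet; optional: true lets the pod start anyway.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-opt-demo
spec:
  containers:
  - name: c
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20
    args: ["pause"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    configMap:
      name: cm-test-opt-create
      optional: true
EOF

# Creating the ConfigMap afterwards is eventually reflected in the volume.
kubectl create configmap cm-test-opt-create --from-literal=data-1=value-1
kubectl exec cm-opt-demo -- cat /etc/cfg/data-1    # prints value-1 once the kubelet resyncs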
• [SLOW TEST:10.339 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":158,"skipped":2628,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:54:36.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod Sep 4 13:54:40.172: INFO: Pod pod-hostip-f6b3b9c5-995f-42ea-9841-282c0e688886 has hostIP: 172.18.0.14 [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:54:40.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-846" for this suite. 
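The assertion here is simply that .status.hostIP is populated once the pod lands on a node; the same check from the command line, using the pod name and namespace from the log:

kubectl get pod pod-hostip-f6b3b9c5-995f-42ea-9841-282c0e688886 --namespace=pods-846 -o jsonpath='{.status.hostIP}'
# -> 172.18.0.14 in the run above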
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":303,"completed":159,"skipped":2639,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:54:40.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-c72cdcf4-40ab-4efd-b0ba-490307ba28ce STEP: Creating configMap with name cm-test-opt-upd-caeede91-0dfb-4529-8742-18f045866e26 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-c72cdcf4-40ab-4efd-b0ba-490307ba28ce STEP: Updating configmap cm-test-opt-upd-caeede91-0dfb-4529-8742-18f045866e26 STEP: Creating configMap with name cm-test-opt-create-1cf4f6b9-8498-4b0d-b4b5-60048ac944c1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:56:12.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-42" for this suite. 
• [SLOW TEST:92.746 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":160,"skipped":2657,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:56:12.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:56:13.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9477" for this suite. 
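The STEPs above are a plain walk down the discovery hierarchy; the same three documents can be fetched raw (jq is assumed available here, used only to filter the output):

kubectl get --raw /apis | jq '.groups[] | select(.name == "apiextensions.k8s.io")'
kubectl get --raw /apis/apiextensions.k8s.io | jq '.versions'
kubectl get --raw /apis/apiextensions.k8s.io/v1 | jq '.resources[] | select(.name == "customresourcedefinitions")'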
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":303,"completed":161,"skipped":2675,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:56:13.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Sep 4 13:56:17.708: INFO: Successfully updated pod "labelsupdateb4140209-980b-49f2-b7de-b872f2797a31" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:56:19.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6514" for this suite. 
• [SLOW TEST:6.743 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":162,"skipped":2684,"failed":0} SSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:56:19.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Sep 4 13:56:20.064: INFO: Waiting up to 5m0s for pod "downward-api-a9684554-3482-453c-bf68-38d45bab1e42" in namespace "downward-api-508" to be "Succeeded or Failed" Sep 4 13:56:20.093: INFO: Pod "downward-api-a9684554-3482-453c-bf68-38d45bab1e42": Phase="Pending", Reason="", readiness=false. Elapsed: 28.83605ms Sep 4 13:56:22.097: INFO: Pod "downward-api-a9684554-3482-453c-bf68-38d45bab1e42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032823287s Sep 4 13:56:24.103: INFO: Pod "downward-api-a9684554-3482-453c-bf68-38d45bab1e42": Phase="Running", Reason="", readiness=true. Elapsed: 4.038598387s Sep 4 13:56:26.112: INFO: Pod "downward-api-a9684554-3482-453c-bf68-38d45bab1e42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.047754343s STEP: Saw pod success Sep 4 13:56:26.112: INFO: Pod "downward-api-a9684554-3482-453c-bf68-38d45bab1e42" satisfied condition "Succeeded or Failed" Sep 4 13:56:26.114: INFO: Trying to get logs from node latest-worker pod downward-api-a9684554-3482-453c-bf68-38d45bab1e42 container dapi-container: STEP: delete the pod Sep 4 13:56:26.334: INFO: Waiting for pod downward-api-a9684554-3482-453c-bf68-38d45bab1e42 to disappear Sep 4 13:56:26.423: INFO: Pod downward-api-a9684554-3482-453c-bf68-38d45bab1e42 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:56:26.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-508" for this suite. 
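Here the container's limits and requests are injected as environment variables via resourceFieldRef, with requests defaulting to limits when unset and CPU values rounded up to whole cores under the default divisor of 1. The test's pod spec is not printed, so the following is an assumed equivalent (the container name dapi-container is taken from the log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep -E '_(LIMIT|REQUEST)='"]
    resources:
      limits:
        cpu: "1"
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
    - name: CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
EOF
kubectl logs dapi-demo    # once Succeeded; prints the four variables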
• [SLOW TEST:6.695 seconds] [sig-node] Downward API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":303,"completed":163,"skipped":2689,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:56:26.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-cdffddfe-76fd-4f58-bc53-082c5533273c STEP: Creating a pod to test consume configMaps Sep 4 13:56:26.551: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-db2fc081-f92d-4120-b9e1-b039ff4a2020" in namespace "projected-1389" to be "Succeeded or Failed" Sep 4 13:56:26.561: INFO: Pod "pod-projected-configmaps-db2fc081-f92d-4120-b9e1-b039ff4a2020": Phase="Pending", Reason="", readiness=false. Elapsed: 10.040855ms Sep 4 13:56:28.565: INFO: Pod "pod-projected-configmaps-db2fc081-f92d-4120-b9e1-b039ff4a2020": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013863508s Sep 4 13:56:30.569: INFO: Pod "pod-projected-configmaps-db2fc081-f92d-4120-b9e1-b039ff4a2020": Phase="Running", Reason="", readiness=true. Elapsed: 4.017720473s Sep 4 13:56:32.573: INFO: Pod "pod-projected-configmaps-db2fc081-f92d-4120-b9e1-b039ff4a2020": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.021882575s STEP: Saw pod success Sep 4 13:56:32.573: INFO: Pod "pod-projected-configmaps-db2fc081-f92d-4120-b9e1-b039ff4a2020" satisfied condition "Succeeded or Failed" Sep 4 13:56:32.577: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-db2fc081-f92d-4120-b9e1-b039ff4a2020 container projected-configmap-volume-test: STEP: delete the pod Sep 4 13:56:32.616: INFO: Waiting for pod pod-projected-configmaps-db2fc081-f92d-4120-b9e1-b039ff4a2020 to disappear Sep 4 13:56:32.631: INFO: Pod pod-projected-configmaps-db2fc081-f92d-4120-b9e1-b039ff4a2020 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:56:32.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1389" for this suite. • [SLOW TEST:6.173 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":164,"skipped":2708,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:56:32.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 4 13:56:33.188: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 4 13:56:35.199: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824593, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824593, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824593, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824593, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 4 13:56:37.203: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824593, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824593, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824593, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734824593, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 4 13:56:40.244: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 13:56:40.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7457-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:56:41.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6314" for this suite. STEP: Destroying namespace "webhook-6314-markers" for this suite. 
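The registration hidden behind "Registering the mutating webhook ... via the AdmissionRegistration API" amounts to creating a MutatingWebhookConfiguration whose rules match the test CRD's group (webhook.example.com, per the log) and whose clientConfig points at the e2e-test-webhook service deployed above. A rough sketch only: the webhook name, path, and caBundle below are placeholders, not the suite's actual values, and a real caBundle must carry the CA that signed the webhook server's cert:

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-custom-resource-demo
webhooks:
- name: mutate-custom-resource.webhook.example.com
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["*"]
    operations: ["CREATE"]
    resources: ["*"]
  clientConfig:
    service:
      namespace: webhook-6314
      name: e2e-test-webhook
      path: /mutating-custom-resource    # assumed path
    caBundle: Cg==                       # placeholder; replace with the real base64 CA bundle
EOF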
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.041 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":303,"completed":165,"skipped":2716,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:56:41.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-6c65a157-112a-49a3-8c68-faa1f45fef60 STEP: Creating a pod to test consume configMaps Sep 4 13:56:41.762: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3152ffcf-806b-46d5-aaba-a34217e3237a" in namespace "projected-9143" to be "Succeeded or Failed" Sep 4 13:56:41.817: INFO: Pod "pod-projected-configmaps-3152ffcf-806b-46d5-aaba-a34217e3237a": Phase="Pending", Reason="", readiness=false. Elapsed: 55.480133ms Sep 4 13:56:43.924: INFO: Pod "pod-projected-configmaps-3152ffcf-806b-46d5-aaba-a34217e3237a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16251993s Sep 4 13:56:46.140: INFO: Pod "pod-projected-configmaps-3152ffcf-806b-46d5-aaba-a34217e3237a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.378542683s Sep 4 13:56:48.144: INFO: Pod "pod-projected-configmaps-3152ffcf-806b-46d5-aaba-a34217e3237a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.382146688s STEP: Saw pod success Sep 4 13:56:48.144: INFO: Pod "pod-projected-configmaps-3152ffcf-806b-46d5-aaba-a34217e3237a" satisfied condition "Succeeded or Failed" Sep 4 13:56:48.146: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-3152ffcf-806b-46d5-aaba-a34217e3237a container projected-configmap-volume-test: STEP: delete the pod Sep 4 13:56:48.219: INFO: Waiting for pod pod-projected-configmaps-3152ffcf-806b-46d5-aaba-a34217e3237a to disappear Sep 4 13:56:48.254: INFO: Pod pod-projected-configmaps-3152ffcf-806b-46d5-aaba-a34217e3237a no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:56:48.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9143" for this suite. • [SLOW TEST:6.582 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":166,"skipped":2727,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:56:48.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs Sep 4 13:56:48.386: INFO: Waiting up to 5m0s for pod "pod-940008f7-e928-4ec5-8cb1-b8066cb43b6b" in namespace "emptydir-2235" to be "Succeeded or Failed" Sep 4 13:56:48.398: INFO: Pod "pod-940008f7-e928-4ec5-8cb1-b8066cb43b6b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.076161ms Sep 4 13:56:50.402: INFO: Pod "pod-940008f7-e928-4ec5-8cb1-b8066cb43b6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016253491s Sep 4 13:56:52.406: INFO: Pod "pod-940008f7-e928-4ec5-8cb1-b8066cb43b6b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02023504s STEP: Saw pod success Sep 4 13:56:52.406: INFO: Pod "pod-940008f7-e928-4ec5-8cb1-b8066cb43b6b" satisfied condition "Succeeded or Failed" Sep 4 13:56:52.409: INFO: Trying to get logs from node latest-worker pod pod-940008f7-e928-4ec5-8cb1-b8066cb43b6b container test-container: STEP: delete the pod Sep 4 13:56:52.480: INFO: Waiting for pod pod-940008f7-e928-4ec5-8cb1-b8066cb43b6b to disappear Sep 4 13:56:52.517: INFO: Pod pod-940008f7-e928-4ec5-8cb1-b8066cb43b6b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 13:56:52.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2235" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":167,"skipped":2733,"failed":0} SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 13:56:52.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-1615 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 4 13:56:52.603: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Sep 4 13:56:52.731: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 4 13:56:55.044: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 4 13:56:56.734: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 4 13:56:58.734: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 4 13:57:00.734: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 4 13:57:02.735: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 4 13:57:04.735: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 4 13:57:06.735: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 4 13:57:08.735: INFO: The status of Pod netserver-0 is Running (Ready = true) Sep 4 13:57:08.743: INFO: The status of Pod netserver-1 is Running (Ready = false) Sep 4 13:57:10.747: INFO: The status of Pod netserver-1 is Running (Ready = false) Sep 4 13:57:12.747: INFO: The status of Pod netserver-1 is Running (Ready = false) Sep 4 13:57:14.747: INFO: The status of Pod netserver-1 is Running (Ready = false) Sep 4 13:57:16.747: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Sep 4 13:57:22.774: INFO: 
ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.190:8080/dial?request=hostname&protocol=udp&host=10.244.2.189&port=8081&tries=1'] Namespace:pod-network-test-1615 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 4 13:57:22.774: INFO: >>> kubeConfig: /root/.kube/config I0904 13:57:22.800518 7 log.go:181] (0xc00352e630) (0xc003bd01e0) Create stream I0904 13:57:22.800549 7 log.go:181] (0xc00352e630) (0xc003bd01e0) Stream added, broadcasting: 1 I0904 13:57:22.811151 7 log.go:181] (0xc00352e630) Reply frame received for 1 I0904 13:57:22.811189 7 log.go:181] (0xc00352e630) (0xc0036f92c0) Create stream I0904 13:57:22.811202 7 log.go:181] (0xc00352e630) (0xc0036f92c0) Stream added, broadcasting: 3 I0904 13:57:22.811899 7 log.go:181] (0xc00352e630) Reply frame received for 3 I0904 13:57:22.811925 7 log.go:181] (0xc00352e630) (0xc0033c3f40) Create stream I0904 13:57:22.811934 7 log.go:181] (0xc00352e630) (0xc0033c3f40) Stream added, broadcasting: 5 I0904 13:57:22.812577 7 log.go:181] (0xc00352e630) Reply frame received for 5 I0904 13:57:22.873155 7 log.go:181] (0xc00352e630) Data frame received for 3 I0904 13:57:22.873192 7 log.go:181] (0xc0036f92c0) (3) Data frame handling I0904 13:57:22.873214 7 log.go:181] (0xc0036f92c0) (3) Data frame sent I0904 13:57:22.873601 7 log.go:181] (0xc00352e630) Data frame received for 3 I0904 13:57:22.873631 7 log.go:181] (0xc0036f92c0) (3) Data frame handling I0904 13:57:22.873659 7 log.go:181] (0xc00352e630) Data frame received for 5 I0904 13:57:22.873676 7 log.go:181] (0xc0033c3f40) (5) Data frame handling I0904 13:57:22.875084 7 log.go:181] (0xc00352e630) Data frame received for 1 I0904 13:57:22.875111 7 log.go:181] (0xc003bd01e0) (1) Data frame handling I0904 13:57:22.875136 7 log.go:181] (0xc003bd01e0) (1) Data frame sent I0904 13:57:22.875154 7 log.go:181] (0xc00352e630) (0xc003bd01e0) Stream removed, broadcasting: 1 I0904 13:57:22.875168 7 log.go:181] (0xc00352e630) Go away received I0904 13:57:22.875268 7 log.go:181] (0xc00352e630) (0xc003bd01e0) Stream removed, broadcasting: 1 I0904 13:57:22.875288 7 log.go:181] (0xc00352e630) (0xc0036f92c0) Stream removed, broadcasting: 3 I0904 13:57:22.875310 7 log.go:181] (0xc00352e630) (0xc0033c3f40) Stream removed, broadcasting: 5 Sep 4 13:57:22.875: INFO: Waiting for responses: map[] Sep 4 13:57:22.878: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.190:8080/dial?request=hostname&protocol=udp&host=10.244.1.251&port=8081&tries=1'] Namespace:pod-network-test-1615 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 4 13:57:22.878: INFO: >>> kubeConfig: /root/.kube/config I0904 13:57:22.909805 7 log.go:181] (0xc006540d10) (0xc002598aa0) Create stream I0904 13:57:22.909853 7 log.go:181] (0xc006540d10) (0xc002598aa0) Stream added, broadcasting: 1 I0904 13:57:22.912350 7 log.go:181] (0xc006540d10) Reply frame received for 1 I0904 13:57:22.912390 7 log.go:181] (0xc006540d10) (0xc0036f95e0) Create stream I0904 13:57:22.912400 7 log.go:181] (0xc006540d10) (0xc0036f95e0) Stream added, broadcasting: 3 I0904 13:57:22.913489 7 log.go:181] (0xc006540d10) Reply frame received for 3 I0904 13:57:22.913519 7 log.go:181] (0xc006540d10) (0xc000ef2000) Create stream I0904 13:57:22.913527 7 log.go:181] (0xc006540d10) (0xc000ef2000) Stream added, broadcasting: 5 I0904 13:57:22.914241 7 log.go:181] (0xc006540d10) Reply frame received for 5 I0904 
13:57:22.981763 7 log.go:181] (0xc006540d10) Data frame received for 3 I0904 13:57:22.981844 7 log.go:181] (0xc0036f95e0) (3) Data frame handling I0904 13:57:22.981880 7 log.go:181] (0xc0036f95e0) (3) Data frame sent I0904 13:57:22.982244 7 log.go:181] (0xc006540d10) Data frame received for 3 I0904 13:57:22.982289 7 log.go:181] (0xc0036f95e0) (3) Data frame handling I0904 13:57:22.982325 7 log.go:181] (0xc006540d10) Data frame received for 5 I0904 13:57:22.982339 7 log.go:181] (0xc000ef2000) (5) Data frame handling I0904 13:57:22.983344 7 log.go:181] (0xc006540d10) Data frame received for 1 I0904 13:57:22.983363 7 log.go:181] (0xc002598aa0) (1) Data frame handling I0904 13:57:22.983377 7 log.go:181] (0xc002598aa0) (1) Data frame sent I0904 13:57:22.983389 7 log.go:181] (0xc006540d10) (0xc002598aa0) Stream removed, broadcasting: 1 I0904 13:57:22.983404 7 log.go:181] (0xc006540d10) Go away received I0904 13:57:22.983525 7 log.go:181] (0xc006540d10) (0xc002598aa0) Stream removed, broadcasting: 1 I0904 13:57:22.983548 7 log.go:181] (0xc006540d10) (0xc0036f95e0) Stream removed, broadcasting: 3 I0904 13:57:22.983559 7 log.go:181] (0xc006540d10) (0xc000ef2000) Stream removed, broadcasting: 5
Sep 4 13:57:22.983: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 4 13:57:22.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1615" for this suite.
• [SLOW TEST:30.467 seconds]
[sig-network] Networking
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for intra-pod communication: udp [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":303,"completed":168,"skipped":2738,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
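The ExecWithOptions entries above are the actual connectivity check: from inside the test-container pod, the framework curls the netserver's /dial endpoint, which relays a UDP "hostname" request to the peer pod and reports who answered; "Waiting for responses: map[]" means no expected hostname is still outstanding. A standalone sketch of that probe (the pod IPs are taken from the log; the "responses" JSON shape is an assumption about the agnhost netserver, not the framework's own code):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Ask the webserver pod (10.244.2.190) to send a UDP "hostname" request
	// to the netserver pod (10.244.2.189) on port 8081 and report the answers.
	// Substitute your own pod IPs; these come from the log above.
	url := "http://10.244.2.190:8080/dial?request=hostname&protocol=udp&host=10.244.2.189&port=8081&tries=1"
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Assumed response shape: a JSON object whose "responses" array lists
	// the hostnames that answered.
	var out struct {
		Responses []string `json:"responses"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}
	fmt.Println("hosts that answered over UDP:", out.Responses)
}

The test passes when every netserver hostname shows up in the responses, which is exactly what the empty "Waiting for responses" map indicates.
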
[k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 4 13:57:22.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod liveness-eebcf8e0-d6d1-4d13-919d-56162a71b1cd in namespace container-probe-8324
Sep 4 13:57:27.119: INFO: Started pod liveness-eebcf8e0-d6d1-4d13-919d-56162a71b1cd in namespace container-probe-8324
STEP: checking the pod's current state and verifying that restartCount is present
Sep 4 13:57:27.121: INFO: Initial restart count of pod liveness-eebcf8e0-d6d1-4d13-919d-56162a71b1cd is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 4 14:01:28.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8324" for this suite.
• [SLOW TEST:245.314 seconds]
[k8s.io] Probing container
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":303,"completed":169,"skipped":2763,"failed":0}
SSSSSSSSSS
------------------------------
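What just ran for a little over four minutes: a pod whose container listens on 8080 and whose liveness probe dials tcp:8080, so the probe keeps succeeding and restartCount stays at its initial 0 for the whole observation window. A rough sketch of such a pod in client-go types; the image, args, and probe timings are assumptions rather than values recorded in this log, and the probe handler field is still named Handler in the v1.19-era API used here (renamed ProbeHandler in later releases):

package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// livenessPod sketches the shape of pod this test exercises: the server
// listens on 8080 and the liveness probe dials that same port, so the
// probe should never fire and the container should never be restarted.
func livenessPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-tcp-8080"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20", // assumed image; any server on :8080 works
				Args:  []string{"netexec", "--http-port=8080"},
				LivenessProbe: &v1.Probe{
					Handler: v1.Handler{
						TCPSocket: &v1.TCPSocketAction{Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15, // assumed timings, not from the log
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}
}

The four-minute soak is the interesting part: the test is not waiting for anything to happen, it is waiting to confirm that nothing does.
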
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 4 14:01:28.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 4 14:01:30.617: INFO: Create a RollingUpdate DaemonSet
Sep 4 14:01:30.621: INFO: Check that daemon pods launch on every node of the cluster
Sep 4 14:01:30.883: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 4 14:01:30.886: INFO: Number of nodes with available pods: 0
Sep 4 14:01:30.886: INFO: Node latest-worker is running more than one daemon pod
Sep 4 14:01:31.892: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 4 14:01:31.896: INFO: Number of nodes with available pods: 0
Sep 4 14:01:31.896: INFO: Node latest-worker is running more than one daemon pod
Sep 4 14:01:33.081: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 4 14:01:33.085: INFO: Number of nodes with available pods: 0
Sep 4 14:01:33.085: INFO: Node latest-worker is running more than one daemon pod
Sep 4 14:01:33.942: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 4 14:01:33.946: INFO: Number of nodes with available pods: 0
Sep 4 14:01:33.946: INFO: Node latest-worker is running more than one daemon pod
Sep 4 14:01:34.912: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 4 14:01:35.002: INFO: Number of nodes with available pods: 0
Sep 4 14:01:35.002: INFO: Node latest-worker is running more than one daemon pod
Sep 4 14:01:35.892: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 4 14:01:35.896: INFO: Number of nodes with available pods: 1
Sep 4 14:01:35.896: INFO: Node latest-worker is running more than one daemon pod
Sep 4 14:01:36.898: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 4 14:01:36.901: INFO: Number of nodes with available pods: 2
Sep 4 14:01:36.901: INFO: Number of running nodes: 2, number of available pods: 2
Sep 4 14:01:36.901: INFO: Update the DaemonSet to trigger a rollout
Sep 4 14:01:36.908: INFO: Updating DaemonSet daemon-set
Sep 4 14:01:41.047: INFO: Roll back the DaemonSet before rollout is complete
Sep 4 14:01:41.055: INFO: Updating DaemonSet daemon-set
Sep 4 14:01:41.055: INFO: Make sure DaemonSet rollback is complete
Sep 4 14:01:41.060: INFO: Wrong image for pod: daemon-set-wwcq6. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Sep 4 14:01:41.060: INFO: Pod daemon-set-wwcq6 is not available
Sep 4 14:01:41.081: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 4 14:01:42.086: INFO: Wrong image for pod: daemon-set-wwcq6. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Sep 4 14:01:42.086: INFO: Pod daemon-set-wwcq6 is not available
Sep 4 14:01:42.090: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 4 14:01:43.085: INFO: Wrong image for pod: daemon-set-wwcq6. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Sep 4 14:01:43.085: INFO: Pod daemon-set-wwcq6 is not available
Sep 4 14:01:43.089: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 4 14:01:44.086: INFO: Wrong image for pod: daemon-set-wwcq6. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Sep 4 14:01:44.086: INFO: Pod daemon-set-wwcq6 is not available
Sep 4 14:01:44.090: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 4 14:01:45.085: INFO: Wrong image for pod: daemon-set-wwcq6. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Sep 4 14:01:45.085: INFO: Pod daemon-set-wwcq6 is not available
Sep 4 14:01:45.090: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 4 14:01:46.085: INFO: Wrong image for pod: daemon-set-wwcq6. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Sep 4 14:01:46.085: INFO: Pod daemon-set-wwcq6 is not available
Sep 4 14:01:46.089: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 4 14:01:47.109: INFO: Wrong image for pod: daemon-set-wwcq6. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Sep 4 14:01:47.109: INFO: Pod daemon-set-wwcq6 is not available
Sep 4 14:01:47.114: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 4 14:01:48.086: INFO: Wrong image for pod: daemon-set-wwcq6. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Sep 4 14:01:48.086: INFO: Pod daemon-set-wwcq6 is not available
Sep 4 14:01:48.090: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 4 14:01:49.097: INFO: Wrong image for pod: daemon-set-wwcq6. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Sep 4 14:01:49.097: INFO: Pod daemon-set-wwcq6 is not available
Sep 4 14:01:49.139: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
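Every polling pass above logs the same skip: the controller ignores latest-control-plane because the test DaemonSet's pod template carries no toleration for the master node's NoSchedule taint. For orientation, this is the toleration that would admit the pods to that node, sketched with the client-go types; it is illustrative only, the conformance DaemonSet simply does not set one, which is why only the two workers are counted:

package sketch

import v1 "k8s.io/api/core/v1"

// masterToleration is what the DaemonSet's pod template would need for its
// pods to be scheduled onto the tainted control-plane node. Operator Exists
// matches the taint regardless of value.
var masterToleration = v1.Toleration{
	Key:      "node-role.kubernetes.io/master",
	Operator: v1.TolerationOpExists,
	Effect:   v1.TaintEffectNoSchedule,
}
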
Sep 4 14:01:50.085: INFO: Pod daemon-set-r7w2q is not available
Sep 4 14:01:50.090: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5881, will wait for the garbage collector to delete the pods
Sep 4 14:01:50.154: INFO: Deleting DaemonSet.extensions daemon-set took: 5.174908ms
Sep 4 14:01:50.555: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.196212ms
Sep 4 14:02:00.222: INFO: Number of nodes with available pods: 0
Sep 4 14:02:00.222: INFO: Number of running nodes: 0, number of available pods: 0
Sep 4 14:02:00.225: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5881/daemonsets","resourceVersion":"6817894"},"items":null}
Sep 4 14:02:00.234: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5881/pods","resourceVersion":"6817895"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 4 14:02:00.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5881" for this suite.
• [SLOW TEST:31.942 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should rollback without unnecessary restarts [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":303,"completed":170,"skipped":2773,"failed":0}
SSSSSS
------------------------------
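The sequence just logged is the point of the test: the image is flipped to an unresolvable one, a rollout starts, and the spec is restored before the rollout can finish. Only daemon-set-wwcq6, which actually received the bad image, gets replaced (by daemon-set-r7w2q); pods that never ran the bad image are left untouched, hence "without unnecessary restarts". A rough client-go sketch of that update-then-rollback sequence, under the assumption that a plain spec update back to the old image is an acceptable stand-in for the framework's own rollback helper:

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rollbackDaemonSet mirrors the sequence in the log: push a bad image to
// start a rollout, then restore the old image before the rollout finishes.
func rollbackDaemonSet(ctx context.Context, c kubernetes.Interface, ns string) error {
	ds, err := c.AppsV1().DaemonSets(ns).Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		return err
	}
	good := ds.Spec.Template.Spec.Containers[0].Image // httpd:2.4.38-alpine in this run

	// Trigger a rollout that can never become healthy.
	ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	ds, err = c.AppsV1().DaemonSets(ns).Update(ctx, ds, metav1.UpdateOptions{})
	if err != nil {
		return err
	}

	// Roll back before the broken rollout completes; healthy pods that never
	// saw the bad image should not be restarted by this second update.
	ds.Spec.Template.Spec.Containers[0].Image = good
	_, err = c.AppsV1().DaemonSets(ns).Update(ctx, ds, metav1.UpdateOptions{})
	return err
}
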
[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 4 14:02:00.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-4395
STEP: creating service affinity-clusterip-transition in namespace services-4395
STEP: creating replication controller affinity-clusterip-transition in namespace services-4395
I0904 14:02:00.504696 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-4395, replica count: 3
I0904 14:02:03.555263 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0904 14:02:06.555451 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0904 14:02:09.555712 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Sep 4 14:02:09.562: INFO: Creating new exec pod
Sep 4 14:02:14.582: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-4395 execpod-affinityg5s4f -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80'
Sep 4 14:02:18.034: INFO: stderr: "I0904 14:02:17.954464 2236 log.go:181] (0xc0007a6000) (0xc0006e4000) Create stream\nI0904 14:02:17.954528 2236 log.go:181] (0xc0007a6000) (0xc0006e4000) Stream added, broadcasting: 1\nI0904 14:02:17.956103 2236 log.go:181] (0xc0007a6000) Reply frame received for 1\nI0904 14:02:17.956149 2236 log.go:181] (0xc0007a6000) (0xc000e8c000) Create stream\nI0904 14:02:17.956159 2236 log.go:181] (0xc0007a6000) (0xc000e8c000) Stream added, broadcasting: 3\nI0904 14:02:17.957112 2236 log.go:181] (0xc0007a6000)
Reply frame received for 3\nI0904 14:02:17.957148 2236 log.go:181] (0xc0007a6000) (0xc000e8c0a0) Create stream\nI0904 14:02:17.957158 2236 log.go:181] (0xc0007a6000) (0xc000e8c0a0) Stream added, broadcasting: 5\nI0904 14:02:17.957840 2236 log.go:181] (0xc0007a6000) Reply frame received for 5\nI0904 14:02:18.025209 2236 log.go:181] (0xc0007a6000) Data frame received for 5\nI0904 14:02:18.025252 2236 log.go:181] (0xc000e8c0a0) (5) Data frame handling\nI0904 14:02:18.025265 2236 log.go:181] (0xc000e8c0a0) (5) Data frame sent\nI0904 14:02:18.025273 2236 log.go:181] (0xc0007a6000) Data frame received for 5\nI0904 14:02:18.025281 2236 log.go:181] (0xc000e8c0a0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0904 14:02:18.025313 2236 log.go:181] (0xc0007a6000) Data frame received for 3\nI0904 14:02:18.025324 2236 log.go:181] (0xc000e8c000) (3) Data frame handling\nI0904 14:02:18.027274 2236 log.go:181] (0xc0007a6000) Data frame received for 1\nI0904 14:02:18.027302 2236 log.go:181] (0xc0006e4000) (1) Data frame handling\nI0904 14:02:18.027329 2236 log.go:181] (0xc0006e4000) (1) Data frame sent\nI0904 14:02:18.027357 2236 log.go:181] (0xc0007a6000) (0xc0006e4000) Stream removed, broadcasting: 1\nI0904 14:02:18.027386 2236 log.go:181] (0xc0007a6000) Go away received\nI0904 14:02:18.027659 2236 log.go:181] (0xc0007a6000) (0xc0006e4000) Stream removed, broadcasting: 1\nI0904 14:02:18.027690 2236 log.go:181] (0xc0007a6000) (0xc000e8c000) Stream removed, broadcasting: 3\nI0904 14:02:18.027696 2236 log.go:181] (0xc0007a6000) (0xc000e8c0a0) Stream removed, broadcasting: 5\n" Sep 4 14:02:18.034: INFO: stdout: "" Sep 4 14:02:18.035: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-4395 execpod-affinityg5s4f -- /bin/sh -x -c nc -zv -t -w 2 10.98.173.64 80' Sep 4 14:02:18.266: INFO: stderr: "I0904 14:02:18.175716 2254 log.go:181] (0xc0009316b0) (0xc000928b40) Create stream\nI0904 14:02:18.175767 2254 log.go:181] (0xc0009316b0) (0xc000928b40) Stream added, broadcasting: 1\nI0904 14:02:18.177952 2254 log.go:181] (0xc0009316b0) Reply frame received for 1\nI0904 14:02:18.177988 2254 log.go:181] (0xc0009316b0) (0xc000928be0) Create stream\nI0904 14:02:18.178004 2254 log.go:181] (0xc0009316b0) (0xc000928be0) Stream added, broadcasting: 3\nI0904 14:02:18.178780 2254 log.go:181] (0xc0009316b0) Reply frame received for 3\nI0904 14:02:18.178807 2254 log.go:181] (0xc0009316b0) (0xc00086a1e0) Create stream\nI0904 14:02:18.178816 2254 log.go:181] (0xc0009316b0) (0xc00086a1e0) Stream added, broadcasting: 5\nI0904 14:02:18.179533 2254 log.go:181] (0xc0009316b0) Reply frame received for 5\nI0904 14:02:18.255900 2254 log.go:181] (0xc0009316b0) Data frame received for 3\nI0904 14:02:18.255923 2254 log.go:181] (0xc000928be0) (3) Data frame handling\nI0904 14:02:18.255959 2254 log.go:181] (0xc0009316b0) Data frame received for 5\nI0904 14:02:18.255980 2254 log.go:181] (0xc00086a1e0) (5) Data frame handling\nI0904 14:02:18.255992 2254 log.go:181] (0xc00086a1e0) (5) Data frame sent\nI0904 14:02:18.256000 2254 log.go:181] (0xc0009316b0) Data frame received for 5\nI0904 14:02:18.256005 2254 log.go:181] (0xc00086a1e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.98.173.64 80\nConnection to 10.98.173.64 80 port [tcp/http] succeeded!\nI0904 14:02:18.257010 2254 log.go:181] (0xc0009316b0) Data frame received for 1\nI0904 14:02:18.257026 2254 
log.go:181] (0xc000928b40) (1) Data frame handling\nI0904 14:02:18.257040 2254 log.go:181] (0xc000928b40) (1) Data frame sent\nI0904 14:02:18.257052 2254 log.go:181] (0xc0009316b0) (0xc000928b40) Stream removed, broadcasting: 1\nI0904 14:02:18.257131 2254 log.go:181] (0xc0009316b0) Go away received\nI0904 14:02:18.257320 2254 log.go:181] (0xc0009316b0) (0xc000928b40) Stream removed, broadcasting: 1\nI0904 14:02:18.257331 2254 log.go:181] (0xc0009316b0) (0xc000928be0) Stream removed, broadcasting: 3\nI0904 14:02:18.257336 2254 log.go:181] (0xc0009316b0) (0xc00086a1e0) Stream removed, broadcasting: 5\n" Sep 4 14:02:18.266: INFO: stdout: "" Sep 4 14:02:18.274: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-4395 execpod-affinityg5s4f -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.98.173.64:80/ ; done' Sep 4 14:02:18.599: INFO: stderr: "I0904 14:02:18.423279 2272 log.go:181] (0xc000e1cf20) (0xc000a20280) Create stream\nI0904 14:02:18.423326 2272 log.go:181] (0xc000e1cf20) (0xc000a20280) Stream added, broadcasting: 1\nI0904 14:02:18.427681 2272 log.go:181] (0xc000e1cf20) Reply frame received for 1\nI0904 14:02:18.427703 2272 log.go:181] (0xc000e1cf20) (0xc000bb8000) Create stream\nI0904 14:02:18.427710 2272 log.go:181] (0xc000e1cf20) (0xc000bb8000) Stream added, broadcasting: 3\nI0904 14:02:18.428314 2272 log.go:181] (0xc000e1cf20) Reply frame received for 3\nI0904 14:02:18.428335 2272 log.go:181] (0xc000e1cf20) (0xc0000cc460) Create stream\nI0904 14:02:18.428342 2272 log.go:181] (0xc000e1cf20) (0xc0000cc460) Stream added, broadcasting: 5\nI0904 14:02:18.428997 2272 log.go:181] (0xc000e1cf20) Reply frame received for 5\nI0904 14:02:18.490536 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.490575 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.490598 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.490642 2272 log.go:181] (0xc000e1cf20) Data frame received for 5\nI0904 14:02:18.490657 2272 log.go:181] (0xc0000cc460) (5) Data frame handling\nI0904 14:02:18.490676 2272 log.go:181] (0xc0000cc460) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.493953 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.493972 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.493990 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.494419 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.494441 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.494452 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.494468 2272 log.go:181] (0xc000e1cf20) Data frame received for 5\nI0904 14:02:18.494478 2272 log.go:181] (0xc0000cc460) (5) Data frame handling\nI0904 14:02:18.494491 2272 log.go:181] (0xc0000cc460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.501907 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.501937 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.501956 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.502401 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.502424 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.502437 2272 log.go:181] (0xc000bb8000) (3) Data frame 
sent\nI0904 14:02:18.502453 2272 log.go:181] (0xc000e1cf20) Data frame received for 5\nI0904 14:02:18.502469 2272 log.go:181] (0xc0000cc460) (5) Data frame handling\nI0904 14:02:18.502485 2272 log.go:181] (0xc0000cc460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.506943 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.506964 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.506973 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.509942 2272 log.go:181] (0xc000e1cf20) Data frame received for 5\nI0904 14:02:18.509961 2272 log.go:181] (0xc0000cc460) (5) Data frame handling\nI0904 14:02:18.509978 2272 log.go:181] (0xc0000cc460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.513192 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.513216 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.513239 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.514346 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.514376 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.514416 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.514606 2272 log.go:181] (0xc000e1cf20) Data frame received for 5\nI0904 14:02:18.514630 2272 log.go:181] (0xc0000cc460) (5) Data frame handling\nI0904 14:02:18.514645 2272 log.go:181] (0xc0000cc460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.514665 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.514677 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.514689 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.519037 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.519069 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.519107 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.519974 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.519993 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.520014 2272 log.go:181] (0xc000e1cf20) Data frame received for 5\nI0904 14:02:18.520035 2272 log.go:181] (0xc0000cc460) (5) Data frame handling\nI0904 14:02:18.520052 2272 log.go:181] (0xc0000cc460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.520073 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.523873 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.523898 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.523921 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.524717 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.524866 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.524886 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.524902 2272 log.go:181] (0xc000e1cf20) Data frame received for 5\nI0904 14:02:18.524911 2272 log.go:181] (0xc0000cc460) (5) Data frame handling\nI0904 14:02:18.524924 2272 log.go:181] (0xc0000cc460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.529438 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.529605 2272 log.go:181] (0xc000bb8000) (3) Data frame 
handling\nI0904 14:02:18.529704 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.529840 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.529866 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.529883 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.529909 2272 log.go:181] (0xc000e1cf20) Data frame received for 5\nI0904 14:02:18.529928 2272 log.go:181] (0xc0000cc460) (5) Data frame handling\nI0904 14:02:18.529954 2272 log.go:181] (0xc0000cc460) (5) Data frame sent\nI0904 14:02:18.529972 2272 log.go:181] (0xc000e1cf20) Data frame received for 5\nI0904 14:02:18.529990 2272 log.go:181] (0xc0000cc460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.530036 2272 log.go:181] (0xc0000cc460) (5) Data frame sent\nI0904 14:02:18.535001 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.535236 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.535306 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.535335 2272 log.go:181] (0xc000e1cf20) Data frame received for 5\nI0904 14:02:18.535351 2272 log.go:181] (0xc0000cc460) (5) Data frame handling\nI0904 14:02:18.535364 2272 log.go:181] (0xc0000cc460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.535439 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.535453 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.535467 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.539843 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.539862 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.539877 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.540550 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.540561 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.540576 2272 log.go:181] (0xc000e1cf20) Data frame received for 5\nI0904 14:02:18.540601 2272 log.go:181] (0xc0000cc460) (5) Data frame handling\nI0904 14:02:18.540617 2272 log.go:181] (0xc0000cc460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.540631 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.545583 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.545609 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.545625 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.546088 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.546109 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.546123 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.546145 2272 log.go:181] (0xc000e1cf20) Data frame received for 5\nI0904 14:02:18.546160 2272 log.go:181] (0xc0000cc460) (5) Data frame handling\nI0904 14:02:18.546180 2272 log.go:181] (0xc0000cc460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.551917 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.551929 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.551936 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.552617 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.552626 2272 log.go:181] (0xc000bb8000) (3) Data frame 
handling\nI0904 14:02:18.552638 2272 log.go:181] (0xc000e1cf20) Data frame received for 5\nI0904 14:02:18.552661 2272 log.go:181] (0xc0000cc460) (5) Data frame handling\nI0904 14:02:18.552680 2272 log.go:181] (0xc0000cc460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.552695 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.557550 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.557560 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.557567 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.558362 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.558371 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.558376 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.558382 2272 log.go:181] (0xc000e1cf20) Data frame received for 5\nI0904 14:02:18.558387 2272 log.go:181] (0xc0000cc460) (5) Data frame handling\nI0904 14:02:18.558392 2272 log.go:181] (0xc0000cc460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.563016 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.563063 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.563097 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.563758 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.563788 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.563800 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.563826 2272 log.go:181] (0xc000e1cf20) Data frame received for 5\nI0904 14:02:18.563851 2272 log.go:181] (0xc0000cc460) (5) Data frame handling\nI0904 14:02:18.563877 2272 log.go:181] (0xc0000cc460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.567858 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.567869 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.567876 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.568705 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.568718 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.568833 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.568858 2272 log.go:181] (0xc000e1cf20) Data frame received for 5\nI0904 14:02:18.568879 2272 log.go:181] (0xc0000cc460) (5) Data frame handling\nI0904 14:02:18.568897 2272 log.go:181] (0xc0000cc460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.574728 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.574780 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.574814 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.575024 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.575045 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.575053 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.575074 2272 log.go:181] (0xc000e1cf20) Data frame received for 5\nI0904 14:02:18.575096 2272 log.go:181] (0xc0000cc460) (5) Data frame handling\nI0904 14:02:18.575114 2272 log.go:181] (0xc0000cc460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.581760 2272 log.go:181] (0xc000e1cf20) Data frame received for 
3\nI0904 14:02:18.581787 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.581807 2272 log.go:181] (0xc000bb8000) (3) Data frame sent\nI0904 14:02:18.582713 2272 log.go:181] (0xc000e1cf20) Data frame received for 3\nI0904 14:02:18.582753 2272 log.go:181] (0xc000bb8000) (3) Data frame handling\nI0904 14:02:18.582780 2272 log.go:181] (0xc000e1cf20) Data frame received for 5\nI0904 14:02:18.582804 2272 log.go:181] (0xc0000cc460) (5) Data frame handling\nI0904 14:02:18.584448 2272 log.go:181] (0xc000e1cf20) Data frame received for 1\nI0904 14:02:18.584481 2272 log.go:181] (0xc000a20280) (1) Data frame handling\nI0904 14:02:18.584503 2272 log.go:181] (0xc000a20280) (1) Data frame sent\nI0904 14:02:18.584538 2272 log.go:181] (0xc000e1cf20) (0xc000a20280) Stream removed, broadcasting: 1\nI0904 14:02:18.584566 2272 log.go:181] (0xc000e1cf20) Go away received\nI0904 14:02:18.585129 2272 log.go:181] (0xc000e1cf20) (0xc000a20280) Stream removed, broadcasting: 1\nI0904 14:02:18.585153 2272 log.go:181] (0xc000e1cf20) (0xc000bb8000) Stream removed, broadcasting: 3\nI0904 14:02:18.585165 2272 log.go:181] (0xc000e1cf20) (0xc0000cc460) Stream removed, broadcasting: 5\n" Sep 4 14:02:18.600: INFO: stdout: "\naffinity-clusterip-transition-hkd6v\naffinity-clusterip-transition-hkd6v\naffinity-clusterip-transition-nzhdk\naffinity-clusterip-transition-nzhdk\naffinity-clusterip-transition-nzhdk\naffinity-clusterip-transition-nzhdk\naffinity-clusterip-transition-nzhdk\naffinity-clusterip-transition-nzhdk\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-nzhdk\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-nzhdk\naffinity-clusterip-transition-hkd6v\naffinity-clusterip-transition-hkd6v\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-fz7wp" Sep 4 14:02:18.600: INFO: Received response from host: affinity-clusterip-transition-hkd6v Sep 4 14:02:18.600: INFO: Received response from host: affinity-clusterip-transition-hkd6v Sep 4 14:02:18.600: INFO: Received response from host: affinity-clusterip-transition-nzhdk Sep 4 14:02:18.600: INFO: Received response from host: affinity-clusterip-transition-nzhdk Sep 4 14:02:18.600: INFO: Received response from host: affinity-clusterip-transition-nzhdk Sep 4 14:02:18.600: INFO: Received response from host: affinity-clusterip-transition-nzhdk Sep 4 14:02:18.600: INFO: Received response from host: affinity-clusterip-transition-nzhdk Sep 4 14:02:18.600: INFO: Received response from host: affinity-clusterip-transition-nzhdk Sep 4 14:02:18.600: INFO: Received response from host: affinity-clusterip-transition-fz7wp Sep 4 14:02:18.600: INFO: Received response from host: affinity-clusterip-transition-nzhdk Sep 4 14:02:18.600: INFO: Received response from host: affinity-clusterip-transition-fz7wp Sep 4 14:02:18.600: INFO: Received response from host: affinity-clusterip-transition-nzhdk Sep 4 14:02:18.600: INFO: Received response from host: affinity-clusterip-transition-hkd6v Sep 4 14:02:18.600: INFO: Received response from host: affinity-clusterip-transition-hkd6v Sep 4 14:02:18.600: INFO: Received response from host: affinity-clusterip-transition-fz7wp Sep 4 14:02:18.600: INFO: Received response from host: affinity-clusterip-transition-fz7wp Sep 4 14:02:18.611: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-4395 execpod-affinityg5s4f -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 
http://10.98.173.64:80/ ; done' Sep 4 14:02:18.967: INFO: stderr: "I0904 14:02:18.759298 2290 log.go:181] (0xc000d26630) (0xc000d1e6e0) Create stream\nI0904 14:02:18.759366 2290 log.go:181] (0xc000d26630) (0xc000d1e6e0) Stream added, broadcasting: 1\nI0904 14:02:18.765263 2290 log.go:181] (0xc000d26630) Reply frame received for 1\nI0904 14:02:18.765304 2290 log.go:181] (0xc000d26630) (0xc000d1e000) Create stream\nI0904 14:02:18.765314 2290 log.go:181] (0xc000d26630) (0xc000d1e000) Stream added, broadcasting: 3\nI0904 14:02:18.766031 2290 log.go:181] (0xc000d26630) Reply frame received for 3\nI0904 14:02:18.766047 2290 log.go:181] (0xc000d26630) (0xc000d1e0a0) Create stream\nI0904 14:02:18.766053 2290 log.go:181] (0xc000d26630) (0xc000d1e0a0) Stream added, broadcasting: 5\nI0904 14:02:18.766809 2290 log.go:181] (0xc000d26630) Reply frame received for 5\nI0904 14:02:18.871777 2290 log.go:181] (0xc000d26630) Data frame received for 3\nI0904 14:02:18.871802 2290 log.go:181] (0xc000d1e000) (3) Data frame handling\nI0904 14:02:18.871809 2290 log.go:181] (0xc000d1e000) (3) Data frame sent\nI0904 14:02:18.871824 2290 log.go:181] (0xc000d26630) Data frame received for 5\nI0904 14:02:18.871829 2290 log.go:181] (0xc000d1e0a0) (5) Data frame handling\nI0904 14:02:18.871834 2290 log.go:181] (0xc000d1e0a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.879004 2290 log.go:181] (0xc000d26630) Data frame received for 3\nI0904 14:02:18.879024 2290 log.go:181] (0xc000d1e000) (3) Data frame handling\nI0904 14:02:18.879030 2290 log.go:181] (0xc000d1e000) (3) Data frame sent\nI0904 14:02:18.879060 2290 log.go:181] (0xc000d26630) Data frame received for 5\nI0904 14:02:18.879082 2290 log.go:181] (0xc000d1e0a0) (5) Data frame handling\nI0904 14:02:18.879096 2290 log.go:181] (0xc000d1e0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.879858 2290 log.go:181] (0xc000d26630) Data frame received for 3\nI0904 14:02:18.879876 2290 log.go:181] (0xc000d1e000) (3) Data frame handling\nI0904 14:02:18.879894 2290 log.go:181] (0xc000d1e000) (3) Data frame sent\nI0904 14:02:18.880251 2290 log.go:181] (0xc000d26630) Data frame received for 5\nI0904 14:02:18.880268 2290 log.go:181] (0xc000d1e0a0) (5) Data frame handling\nI0904 14:02:18.880281 2290 log.go:181] (0xc000d1e0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.880297 2290 log.go:181] (0xc000d26630) Data frame received for 3\nI0904 14:02:18.880309 2290 log.go:181] (0xc000d1e000) (3) Data frame handling\nI0904 14:02:18.880316 2290 log.go:181] (0xc000d1e000) (3) Data frame sent\nI0904 14:02:18.883711 2290 log.go:181] (0xc000d26630) Data frame received for 3\nI0904 14:02:18.883751 2290 log.go:181] (0xc000d1e000) (3) Data frame handling\nI0904 14:02:18.883776 2290 log.go:181] (0xc000d1e000) (3) Data frame sent\nI0904 14:02:18.884145 2290 log.go:181] (0xc000d26630) Data frame received for 5\nI0904 14:02:18.884167 2290 log.go:181] (0xc000d1e0a0) (5) Data frame handling\nI0904 14:02:18.884180 2290 log.go:181] (0xc000d1e0a0) (5) Data frame sent\n+ echo\n+ curl -qI0904 14:02:18.885214 2290 log.go:181] (0xc000d26630) Data frame received for 5\nI0904 14:02:18.885233 2290 log.go:181] (0xc000d1e0a0) (5) Data frame handling\nI0904 14:02:18.885242 2290 log.go:181] (0xc000d1e0a0) (5) Data frame sent\n -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.885439 2290 log.go:181] (0xc000d26630) Data 
frame received for 3\nI0904 14:02:18.885449 2290 log.go:181] (0xc000d1e000) (3) Data frame handling\nI0904 14:02:18.885454 2290 log.go:181] (0xc000d1e000) (3) Data frame sent\nI0904 14:02:18.889123 2290 log.go:181] (0xc000d26630) Data frame received for 3\nI0904 14:02:18.889136 2290 log.go:181] (0xc000d1e000) (3) Data frame handling\nI0904 14:02:18.889152 2290 log.go:181] (0xc000d1e000) (3) Data frame sent\nI0904 14:02:18.889471 2290 log.go:181] (0xc000d26630) Data frame received for 3\nI0904 14:02:18.889486 2290 log.go:181] (0xc000d1e000) (3) Data frame handling\nI0904 14:02:18.889499 2290 log.go:181] (0xc000d1e000) (3) Data frame sent\nI0904 14:02:18.889513 2290 log.go:181] (0xc000d26630) Data frame received for 5\nI0904 14:02:18.889521 2290 log.go:181] (0xc000d1e0a0) (5) Data frame handling\nI0904 14:02:18.889529 2290 log.go:181] (0xc000d1e0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.892452 2290 log.go:181] (0xc000d26630) Data frame received for 3\nI0904 14:02:18.892469 2290 log.go:181] (0xc000d1e000) (3) Data frame handling\nI0904 14:02:18.892481 2290 log.go:181] (0xc000d1e000) (3) Data frame sent\nI0904 14:02:18.892843 2290 log.go:181] (0xc000d26630) Data frame received for 3\nI0904 14:02:18.892867 2290 log.go:181] (0xc000d1e000) (3) Data frame handling\nI0904 14:02:18.892880 2290 log.go:181] (0xc000d1e000) (3) Data frame sent\nI0904 14:02:18.892898 2290 log.go:181] (0xc000d26630) Data frame received for 5\nI0904 14:02:18.892909 2290 log.go:181] (0xc000d1e0a0) (5) Data frame handling\nI0904 14:02:18.892920 2290 log.go:181] (0xc000d1e0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.895429 2290 log.go:181] (0xc000d26630) Data frame received for 3\nI0904 14:02:18.895440 2290 log.go:181] (0xc000d1e000) (3) Data frame handling\nI0904 14:02:18.895448 2290 log.go:181] (0xc000d1e000) (3) Data frame sent\nI0904 14:02:18.895917 2290 log.go:181] (0xc000d26630) Data frame received for 3\nI0904 14:02:18.895928 2290 log.go:181] (0xc000d1e000) (3) Data frame handling\nI0904 14:02:18.895938 2290 log.go:181] (0xc000d26630) Data frame received for 5\nI0904 14:02:18.895953 2290 log.go:181] (0xc000d1e0a0) (5) Data frame handling\nI0904 14:02:18.895963 2290 log.go:181] (0xc000d1e0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.895982 2290 log.go:181] (0xc000d1e000) (3) Data frame sent\nI0904 14:02:18.907124 2290 log.go:181] (0xc000d26630) Data frame received for 3\nI0904 14:02:18.907135 2290 log.go:181] (0xc000d1e000) (3) Data frame handling\nI0904 14:02:18.907143 2290 log.go:181] (0xc000d1e000) (3) Data frame sent\nI0904 14:02:18.907552 2290 log.go:181] (0xc000d26630) Data frame received for 3\nI0904 14:02:18.907570 2290 log.go:181] (0xc000d26630) Data frame received for 5\nI0904 14:02:18.907599 2290 log.go:181] (0xc000d1e0a0) (5) Data frame handling\nI0904 14:02:18.907612 2290 log.go:181] (0xc000d1e0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.907623 2290 log.go:181] (0xc000d1e000) (3) Data frame handling\nI0904 14:02:18.907629 2290 log.go:181] (0xc000d1e000) (3) Data frame sent\nI0904 14:02:18.919399 2290 log.go:181] (0xc000d26630) Data frame received for 3\nI0904 14:02:18.919414 2290 log.go:181] (0xc000d1e000) (3) Data frame handling\nI0904 14:02:18.919425 2290 log.go:181] (0xc000d1e000) (3) Data frame sent\nI0904 14:02:18.919833 2290 log.go:181] (0xc000d26630) Data 
frame received for 3\nI0904 14:02:18.919856 2290 log.go:181] (0xc000d1e000) (3) Data frame handling\nI0904 14:02:18.919867 2290 log.go:181] (0xc000d1e000) (3) Data frame sent\nI0904 14:02:18.919878 2290 log.go:181] (0xc000d26630) Data frame received for 5\nI0904 14:02:18.919887 2290 log.go:181] (0xc000d1e0a0) (5) Data frame handling\nI0904 14:02:18.919893 2290 log.go:181] (0xc000d1e0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.928132 2290 log.go:181] (0xc000d26630) Data frame received for 3\nI0904 14:02:18.928163 2290 log.go:181] (0xc000d1e000) (3) Data frame handling\nI0904 14:02:18.928195 2290 log.go:181] (0xc000d1e000) (3) Data frame sent\nI0904 14:02:18.928506 2290 log.go:181] (0xc000d26630) Data frame received for 3\nI0904 14:02:18.928524 2290 log.go:181] (0xc000d1e000) (3) Data frame handling\nI0904 14:02:18.928532 2290 log.go:181] (0xc000d1e000) (3) Data frame sent\nI0904 14:02:18.928545 2290 log.go:181] (0xc000d26630) Data frame received for 5\nI0904 14:02:18.928556 2290 log.go:181] (0xc000d1e0a0) (5) Data frame handling\nI0904 14:02:18.928580 2290 log.go:181] (0xc000d1e0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.932036 2290 log.go:181] (0xc000d26630) Data frame received for 3\nI0904 14:02:18.932062 2290 log.go:181] (0xc000d1e000) (3) Data frame handling\nI0904 14:02:18.932086 2290 log.go:181] (0xc000d1e000) (3) Data frame sent\nI0904 14:02:18.935664 2290 log.go:181] (0xc000d26630) Data frame received for 3\nI0904 14:02:18.935684 2290 log.go:181] (0xc000d1e000) (3) Data frame handling\nI0904 14:02:18.935695 2290 log.go:181] (0xc000d1e000) (3) Data frame sent\nI0904 14:02:18.935722 2290 log.go:181] (0xc000d26630) Data frame received for 5\nI0904 14:02:18.935750 2290 log.go:181] (0xc000d1e0a0) (5) Data frame handling\nI0904 14:02:18.935780 2290 log.go:181] (0xc000d1e0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.936059 2290 log.go:181] (0xc000d26630) Data frame received for 3\nI0904 14:02:18.936135 2290 log.go:181] (0xc000d1e000) (3) Data frame handling\nI0904 14:02:18.936212 2290 log.go:181] (0xc000d1e000) (3) Data frame sent\nI0904 14:02:18.936381 2290 log.go:181] (0xc000d26630) Data frame received for 5\nI0904 14:02:18.936394 2290 log.go:181] (0xc000d1e0a0) (5) Data frame handling\nI0904 14:02:18.936402 2290 log.go:181] (0xc000d1e0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.936416 2290 log.go:181] (0xc000d26630) Data frame received for 3\nI0904 14:02:18.936443 2290 log.go:181] (0xc000d1e000) (3) Data frame handling\nI0904 14:02:18.936473 2290 log.go:181] (0xc000d1e000) (3) Data frame sent\nI0904 14:02:18.940018 2290 log.go:181] (0xc000d26630) Data frame received for 3\nI0904 14:02:18.940034 2290 log.go:181] (0xc000d1e000) (3) Data frame handling\nI0904 14:02:18.940049 2290 log.go:181] (0xc000d1e000) (3) Data frame sent\nI0904 14:02:18.940561 2290 log.go:181] (0xc000d26630) Data frame received for 5\nI0904 14:02:18.940573 2290 log.go:181] (0xc000d1e0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/\nI0904 14:02:18.940585 2290 log.go:181] (0xc000d26630) Data frame received for 3\nI0904 14:02:18.940611 2290 log.go:181] (0xc000d1e000) (3) Data frame handling\nI0904 14:02:18.940622 2290 log.go:181] (0xc000d1e000) (3) Data frame sent\nI0904 14:02:18.940631 2290 log.go:181] (0xc000d1e0a0) (5) Data frame 
sent\n... [the remaining stderr is repeated log.go:181 SPDY data-frame received/handling/sent entries for streams 3 and 5, interleaved with the shell trace '+ echo' and '+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/', ending with streams 1, 3 and 5 being removed] ...\n"
Sep 4 14:02:18.968: INFO: stdout: "\naffinity-clusterip-transition-nzhdk\naffinity-clusterip-transition-nzhdk\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-hkd6v\naffinity-clusterip-transition-hkd6v\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-fz7wp"
Sep 4 14:02:18.968: INFO: Received response from host: affinity-clusterip-transition-nzhdk
Sep 4 14:02:18.968: INFO: Received response from host: affinity-clusterip-transition-nzhdk
Sep 4 14:02:18.968: INFO: Received response from host: affinity-clusterip-transition-fz7wp
Sep 4 14:02:18.968: INFO: Received response from host: affinity-clusterip-transition-fz7wp
Sep 4 14:02:18.968: INFO: Received response from host: affinity-clusterip-transition-hkd6v
Sep 4 14:02:18.968: INFO: Received response from host: affinity-clusterip-transition-hkd6v
Sep 4 14:02:18.968: INFO: Received response from host: affinity-clusterip-transition-fz7wp
[previous line repeated 10 times in total]
Sep 4 14:02:48.968: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-4395 execpod-affinityg5s4f -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.98.173.64:80/ ; done'
Sep 4 14:02:49.299: INFO: stderr: "... [SPDY streams 1, 3 and 5 created, then repeated log.go:181 data-frame received/handling/sent entries interleaved with the shell trace '+ seq 0 15', '+ echo' and '+ curl -q -s --connect-timeout 2 http://10.98.173.64:80/', ending with streams 1, 3 and 5 being removed] ...\n"
Sep 4 14:02:49.299: INFO: stdout: "\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-fz7wp\naffinity-clusterip-transition-fz7wp"
Sep 4 14:02:49.300: INFO: Received response from host: affinity-clusterip-transition-fz7wp
[previous line repeated 16 times in total]
Sep 4 14:02:49.300: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-4395, will wait for the garbage collector to delete the pods
Sep 4 14:02:49.429: INFO: Deleting ReplicationController affinity-clusterip-transition took: 23.108591ms
Sep 4 14:02:50.029: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 600.266105ms
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 4 14:03:00.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4395" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:59.848 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":171,"skipped":2779,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:03:00.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Sep 4 14:03:10.271: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 4 14:03:10.279: INFO: Pod pod-with-prestop-exec-hook still exists Sep 4 14:03:12.279: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 4 14:03:12.286: INFO: Pod pod-with-prestop-exec-hook still exists Sep 4 14:03:14.279: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 4 14:03:14.284: INFO: Pod pod-with-prestop-exec-hook still exists Sep 4 14:03:16.279: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 4 14:03:16.283: INFO: Pod pod-with-prestop-exec-hook still exists Sep 4 14:03:18.279: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 4 14:03:18.284: INFO: Pod pod-with-prestop-exec-hook still exists Sep 4 14:03:20.279: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 4 14:03:20.284: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:03:20.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2234" for this suite. 
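
The pod this spec deletes carries a preStop exec handler, which the kubelet runs before stopping the container; that is why the log polls while graceful deletion completes. A minimal sketch of such a pod follows (the main command and the hook command are illustrative, not the test's exact ones):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func podWithPreStopHook() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs inside the container just before it is stopped;
					// the kubelet delays termination until the hook returns.
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "echo prestop ran"},
						},
					},
				},
			}},
		},
	}
}
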
• [SLOW TEST:20.219 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":303,"completed":172,"skipped":2787,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:03:20.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:03:33.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7857" for this suite. • [SLOW TEST:13.311 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":303,"completed":173,"skipped":2807,"failed":0} SSSS ------------------------------ [sig-network] Services should test the lifecycle of an Endpoint [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:03:33.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:03:33.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4139" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":303,"completed":174,"skipped":2811,"failed":0} ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:03:33.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Sep 4 14:03:34.034: INFO: Waiting up to 1m0s for all nodes to be ready Sep 4 14:04:34.066: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. 
Sep 4 14:04:34.090: INFO: Created pod: pod0-sched-preemption-low-priority Sep 4 14:04:34.130: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:04:56.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-4178" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:82.545 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":303,"completed":175,"skipped":2811,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:04:56.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 4 14:04:57.450: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 4 14:04:59.581: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825097, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825097, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825097, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825097, 
loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 4 14:05:01.753: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825097, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825097, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825097, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825097, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 4 14:05:04.623: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:05:04.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5714" for this suite. STEP: Destroying namespace "webhook-5714-markers" for this suite. 
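
Registering "the mutating pod webhook via the AdmissionRegistration API", as the step above says, comes down to creating a MutatingWebhookConfiguration that points at the webhook Service. A rough sketch under assumed names (webhook name, namespace, and path are illustrative; the Service name e2e-test-webhook mirrors the log):

package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func registerPodMutatingWebhook(ctx context.Context, cs kubernetes.Interface, caBundle []byte) error {
	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/mutating-pods" // illustrative path on the webhook server
	cfg := &admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-defaulter"},
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "pod-defaulter.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				// Admission requests are routed to the webhook Deployment
				// through this Service.
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-ns", Name: "e2e-test-webhook", Path: &path,
				},
				CABundle: caBundle,
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups: []string{""}, APIVersions: []string{"v1"}, Resources: []string{"pods"},
				},
			}},
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	_, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().Create(ctx, cfg, metav1.CreateOptions{})
	return err
}
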
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.575 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":303,"completed":176,"skipped":2815,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:05:04.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:05:11.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1344" for this suite. 
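
This spec's assertion, that a busybox command's stdout shows up in the pod's logs, is the same read-back any client can do through the pod log subresource. A minimal sketch (pod name and namespace are illustrative):

package main

import (
	"context"
	"io/ioutil"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

func readPodOutput(ctx context.Context, cs kubernetes.Interface) (string, error) {
	// Assumes a pod "busybox-logger" in "default" whose command wrote to stdout.
	req := cs.CoreV1().Pods("default").GetLogs("busybox-logger", &corev1.PodLogOptions{})
	stream, err := req.Stream(ctx)
	if err != nil {
		return "", err
	}
	defer stream.Close()
	out, err := ioutil.ReadAll(stream)
	return string(out), err
}
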
• [SLOW TEST:6.129 seconds] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox command in a pod /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:41 should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":303,"completed":177,"skipped":2833,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:05:11.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-7529b565-bd8f-43ba-be5c-8133800de2ec STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:05:17.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5786" for this suite. 
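
The ConfigMap in this spec carries both text and binary payloads; binaryData is the field that allows the latter, and both maps are projected as files when the ConfigMap is mounted as a volume. A small sketch (key names and byte values are illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func binaryConfigMap() *corev1.ConfigMap {
	return &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd"},
		// Data holds UTF-8 text; BinaryData holds arbitrary bytes.
		Data:       map[string]string{"data-1": "value-1"},
		BinaryData: map[string][]byte{"dump.bin": {0xde, 0xad, 0xbe, 0xef}},
	}
}
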
• [SLOW TEST:6.308 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":178,"skipped":2846,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:05:17.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Sep 4 14:05:17.497: INFO: PodSpec: initContainers in spec.initContainers Sep 4 14:06:13.760: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-89fb0e43-a4fd-4ffd-bc10-a57417b97e9c", GenerateName:"", Namespace:"init-container-5575", SelfLink:"/api/v1/namespaces/init-container-5575/pods/pod-init-89fb0e43-a4fd-4ffd-bc10-a57417b97e9c", UID:"af3b0e3c-da28-4d33-bd8c-71f7198597da", ResourceVersion:"6819092", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63734825117, loc:(*time.Location)(0x7702840)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"497713648"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00362c200), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00362c240)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00362c280), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00362c2c0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-8z72w", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc006ba20c0), 
NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8z72w", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8z72w", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8z72w", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004a59df8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00290ebd0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004a59e80)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004a59ea0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc004a59ea8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc004a59eac), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc00333f740), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825117, loc:(*time.Location)(0x7702840)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825117, loc:(*time.Location)(0x7702840)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825117, loc:(*time.Location)(0x7702840)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825117, loc:(*time.Location)(0x7702840)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.11", PodIP:"10.244.2.200", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.200"}}, StartTime:(*v1.Time)(0xc00362c300), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(0xc00290ed20)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00290ed90)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://598f3beab3f1396334f9c0490e7adb7dc2b5aaf9796e8024beaf800bf76f135e", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00362c380), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00362c340), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc004a59f2f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:06:13.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5575" for this suite. 
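
The pod dump above is easier to read knowing the shape being exercised: on a RestartAlways pod, the failing init container init1 (/bin/false) is restarted with backoff (note RestartCount:3), while init2 and the app container run1 never start. Reconstructed from the dump as a sketch:

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func podWithFailingInit() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				// init1 always fails, so init2 and run1 stay Pending while
				// the kubelet restarts init1 with exponential backoff.
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{{
				Name:  "run1",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					Limits:   corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("100m")},
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("100m")},
				},
			}},
		},
	}
}
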
• [SLOW TEST:56.399 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":303,"completed":179,"skipped":2887,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:06:13.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-5f4cc980-5001-40a1-b77d-f7f136ebe274 STEP: Creating a pod to test consume secrets Sep 4 14:06:14.023: INFO: Waiting up to 5m0s for pod "pod-secrets-2ef1032d-07bd-4913-b53a-21a9fee8b919" in namespace "secrets-4427" to be "Succeeded or Failed" Sep 4 14:06:14.025: INFO: Pod "pod-secrets-2ef1032d-07bd-4913-b53a-21a9fee8b919": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210478ms Sep 4 14:06:16.029: INFO: Pod "pod-secrets-2ef1032d-07bd-4913-b53a-21a9fee8b919": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006042713s Sep 4 14:06:18.032: INFO: Pod "pod-secrets-2ef1032d-07bd-4913-b53a-21a9fee8b919": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00973427s Sep 4 14:06:20.036: INFO: Pod "pod-secrets-2ef1032d-07bd-4913-b53a-21a9fee8b919": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013434232s STEP: Saw pod success Sep 4 14:06:20.036: INFO: Pod "pod-secrets-2ef1032d-07bd-4913-b53a-21a9fee8b919" satisfied condition "Succeeded or Failed" Sep 4 14:06:20.039: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-2ef1032d-07bd-4913-b53a-21a9fee8b919 container secret-volume-test: STEP: delete the pod Sep 4 14:06:20.082: INFO: Waiting for pod pod-secrets-2ef1032d-07bd-4913-b53a-21a9fee8b919 to disappear Sep 4 14:06:20.142: INFO: Pod pod-secrets-2ef1032d-07bd-4913-b53a-21a9fee8b919 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:06:20.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4427" for this suite. 
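
The secret volume in this spec combines an explicit defaultMode with a pod-level fsGroup while running as a non-root user. A sketch of the relevant fields (the mode, UID, GID, secret name, and mount path are illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func secretVolumePod() *corev1.Pod {
	defaultMode := int32(0440)
	uid, fsGroup := int64(1000), int64(2000)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &uid,
				// fsGroup makes the kubelet set group ownership on volume files.
				FSGroup: &fsGroup,
			},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test",
						DefaultMode: &defaultMode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"sh", "-c", "ls -l /etc/secret-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
}
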
• [SLOW TEST:6.317 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":180,"skipped":2891,"failed":0} SSSSSSS ------------------------------ [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:06:20.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:06:20.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2251" for this suite. 
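
The ServiceAccount lifecycle walked through above (create, patch, find by label, delete) maps onto a short client-go round-trip. A sketch, with an illustrative account name and label:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

func serviceAccountLifecycle(ctx context.Context, cs kubernetes.Interface) error {
	sas := cs.CoreV1().ServiceAccounts("default")
	if _, err := sas.Create(ctx, &corev1.ServiceAccount{
		ObjectMeta: metav1.ObjectMeta{Name: "sa-demo"},
	}, metav1.CreateOptions{}); err != nil {
		return err
	}
	// Patch a label on, then find the account again by that label.
	patch := []byte(`{"metadata":{"labels":{"purpose":"demo"}}}`)
	if _, err := sas.Patch(ctx, "sa-demo", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		return err
	}
	if _, err := sas.List(ctx, metav1.ListOptions{LabelSelector: "purpose=demo"}); err != nil {
		return err
	}
	return sas.Delete(ctx, "sa-demo", metav1.DeleteOptions{})
}
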
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":303,"completed":181,"skipped":2898,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:06:20.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Sep 4 14:06:21.067: INFO: Waiting up to 5m0s for pod "downward-api-3564ee21-b685-4305-843f-4303f2058291" in namespace "downward-api-6434" to be "Succeeded or Failed" Sep 4 14:06:21.082: INFO: Pod "downward-api-3564ee21-b685-4305-843f-4303f2058291": Phase="Pending", Reason="", readiness=false. Elapsed: 14.295361ms Sep 4 14:06:23.203: INFO: Pod "downward-api-3564ee21-b685-4305-843f-4303f2058291": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135343018s Sep 4 14:06:25.208: INFO: Pod "downward-api-3564ee21-b685-4305-843f-4303f2058291": Phase="Running", Reason="", readiness=true. Elapsed: 4.141008916s Sep 4 14:06:27.213: INFO: Pod "downward-api-3564ee21-b685-4305-843f-4303f2058291": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.145477271s STEP: Saw pod success Sep 4 14:06:27.213: INFO: Pod "downward-api-3564ee21-b685-4305-843f-4303f2058291" satisfied condition "Succeeded or Failed" Sep 4 14:06:27.215: INFO: Trying to get logs from node latest-worker2 pod downward-api-3564ee21-b685-4305-843f-4303f2058291 container dapi-container: STEP: delete the pod Sep 4 14:06:27.295: INFO: Waiting for pod downward-api-3564ee21-b685-4305-843f-4303f2058291 to disappear Sep 4 14:06:27.316: INFO: Pod downward-api-3564ee21-b685-4305-843f-4303f2058291 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:06:27.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6434" for this suite. 
• [SLOW TEST:6.387 seconds] [sig-node] Downward API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":303,"completed":182,"skipped":2937,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:06:27.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-1929 [It] should have a working scale subresource [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-1929 Sep 4 14:06:27.470: INFO: Found 0 stateful pods, waiting for 1 Sep 4 14:06:37.475: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 4 14:06:37.501: INFO: Deleting all statefulset in ns statefulset-1929 Sep 4 14:06:37.580: INFO: Scaling statefulset ss to 0 Sep 4 14:07:07.658: INFO: Waiting for statefulset status.replicas updated to 0 Sep 4 14:07:07.660: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:07:07.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1929" for this suite. 
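Note: "getting/updating a scale subresource" above corresponds to the GetScale/UpdateScale calls on the typed StatefulSet client; the scale write only changes spec.replicas, and the test then verifies that the StatefulSet's own Spec.Replicas was modified. A sketch under that assumption — the namespace and name mirror the log, but the code is illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	sts := cs.AppsV1().StatefulSets("statefulset-1929")

	// getting scale subresource
	scale, err := sts.GetScale(ctx, "ss", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// updating a scale subresource: only spec.replicas is writable here
	scale.Spec.Replicas = 2
	if _, err := sts.UpdateScale(ctx, "ss", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("requested replicas:", scale.Spec.Replicas)
}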
• [SLOW TEST:40.378 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":303,"completed":183,"skipped":2962,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:07:07.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Sep 4 14:07:13.878: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-4178 PodName:pod-sharedvolume-4e2515b2-4af2-48ca-936d-380e3c0cf55e ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 4 14:07:13.878: INFO: >>> kubeConfig: /root/.kube/config I0904 14:07:13.930365 7 log.go:181] (0xc002df4420) (0xc0040ef4a0) Create stream I0904 14:07:13.930400 7 log.go:181] (0xc002df4420) (0xc0040ef4a0) Stream added, broadcasting: 1 I0904 14:07:13.933457 7 log.go:181] (0xc002df4420) Reply frame received for 1 I0904 14:07:13.933503 7 log.go:181] (0xc002df4420) (0xc000f7f400) Create stream I0904 14:07:13.933517 7 log.go:181] (0xc002df4420) (0xc000f7f400) Stream added, broadcasting: 3 I0904 14:07:13.934440 7 log.go:181] (0xc002df4420) Reply frame received for 3 I0904 14:07:13.934489 7 log.go:181] (0xc002df4420) (0xc000f7f540) Create stream I0904 14:07:13.934508 7 log.go:181] (0xc002df4420) (0xc000f7f540) Stream added, broadcasting: 5 I0904 14:07:13.935397 7 log.go:181] (0xc002df4420) Reply frame received for 5 I0904 14:07:14.004907 7 log.go:181] (0xc002df4420) Data frame received for 5 I0904 14:07:14.004940 7 log.go:181] (0xc000f7f540) (5) Data frame handling I0904 14:07:14.004956 7 log.go:181] (0xc002df4420) Data frame received for 3 I0904 14:07:14.004962 7 log.go:181] (0xc000f7f400) (3) Data frame handling I0904 14:07:14.004969 7 log.go:181] (0xc000f7f400) (3) Data frame sent I0904 14:07:14.004976 7 log.go:181] (0xc002df4420) Data frame received for 3 I0904 14:07:14.004980 7 log.go:181] (0xc000f7f400) (3) Data frame 
handling I0904 14:07:14.006146 7 log.go:181] (0xc002df4420) Data frame received for 1 I0904 14:07:14.006159 7 log.go:181] (0xc0040ef4a0) (1) Data frame handling I0904 14:07:14.006167 7 log.go:181] (0xc0040ef4a0) (1) Data frame sent I0904 14:07:14.006240 7 log.go:181] (0xc002df4420) (0xc0040ef4a0) Stream removed, broadcasting: 1 I0904 14:07:14.006312 7 log.go:181] (0xc002df4420) (0xc0040ef4a0) Stream removed, broadcasting: 1 I0904 14:07:14.006323 7 log.go:181] (0xc002df4420) (0xc000f7f400) Stream removed, broadcasting: 3 I0904 14:07:14.006452 7 log.go:181] (0xc002df4420) (0xc000f7f540) Stream removed, broadcasting: 5 I0904 14:07:14.006494 7 log.go:181] (0xc002df4420) Go away received Sep 4 14:07:14.006: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:07:14.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4178" for this suite. • [SLOW TEST:6.338 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":303,"completed":184,"skipped":2970,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:07:14.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 4 14:07:14.707: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 4 14:07:16.717: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825234, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825234, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825234, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825234, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 4 14:07:18.721: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825234, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825234, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825234, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825234, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 4 14:07:21.794: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:07:32.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2742" for this suite. STEP: Destroying namespace "webhook-2742-markers" for this suite. 
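Note: the registration above is a ValidatingWebhookConfiguration pointing at the deployed e2e-test-webhook service, with a namespaceSelector that lets a marker-labeled namespace bypass the webhook. A sketch of that shape in Go API types; the configuration name, marker label, and webhook name are illustrative, and a real registration must carry the serving CA bundle.

package main

import (
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fail := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-pods-and-configmaps"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name:                    "deny.example.com", // illustrative
			AdmissionReviewVersions: []string{"v1"},
			SideEffects:             &sideEffects,
			FailurePolicy:           &fail,
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-2742", // where sample-webhook-deployment runs
					Name:      "e2e-test-webhook",
				},
				CABundle: nil, // the webhook server's CA goes here in a real config
			},
			// intercept pod and configmap writes
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{
					admissionregistrationv1.Create,
					admissionregistrationv1.Update,
				},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"pods", "configmaps"},
				},
			}},
			// namespaces carrying the marker label bypass the webhook entirely
			NamespaceSelector: &metav1.LabelSelector{
				MatchExpressions: []metav1.LabelSelectorRequirement{{
					Key:      "skip-webhook", // illustrative marker label
					Operator: metav1.LabelSelectorOpDoesNotExist,
				}},
			},
		}},
	}
	fmt.Println(cfg.Name)
}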
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.103 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":303,"completed":185,"skipped":2989,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:07:32.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 14:07:32.980: INFO: Checking APIGroup: apiregistration.k8s.io Sep 4 14:07:32.981: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Sep 4 14:07:32.982: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Sep 4 14:07:32.982: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Sep 4 14:07:32.982: INFO: Checking APIGroup: extensions Sep 4 14:07:32.983: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Sep 4 14:07:32.983: INFO: Versions found [{extensions/v1beta1 v1beta1}] Sep 4 14:07:32.983: INFO: extensions/v1beta1 matches extensions/v1beta1 Sep 4 14:07:32.983: INFO: Checking APIGroup: apps Sep 4 14:07:32.985: INFO: PreferredVersion.GroupVersion: apps/v1 Sep 4 14:07:32.985: INFO: Versions found [{apps/v1 v1}] Sep 4 14:07:32.985: INFO: apps/v1 matches apps/v1 Sep 4 14:07:32.985: INFO: Checking APIGroup: events.k8s.io Sep 4 14:07:32.987: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Sep 4 14:07:32.987: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Sep 4 14:07:32.987: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Sep 4 14:07:32.987: INFO: Checking APIGroup: authentication.k8s.io Sep 4 14:07:32.989: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Sep 4 14:07:32.989: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Sep 4 14:07:32.989: INFO: 
authentication.k8s.io/v1 matches authentication.k8s.io/v1 Sep 4 14:07:32.989: INFO: Checking APIGroup: authorization.k8s.io Sep 4 14:07:32.990: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Sep 4 14:07:32.990: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Sep 4 14:07:32.990: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Sep 4 14:07:32.990: INFO: Checking APIGroup: autoscaling Sep 4 14:07:32.991: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Sep 4 14:07:32.991: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Sep 4 14:07:32.991: INFO: autoscaling/v1 matches autoscaling/v1 Sep 4 14:07:32.991: INFO: Checking APIGroup: batch Sep 4 14:07:32.992: INFO: PreferredVersion.GroupVersion: batch/v1 Sep 4 14:07:32.992: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Sep 4 14:07:32.992: INFO: batch/v1 matches batch/v1 Sep 4 14:07:32.992: INFO: Checking APIGroup: certificates.k8s.io Sep 4 14:07:32.993: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Sep 4 14:07:32.993: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Sep 4 14:07:32.993: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Sep 4 14:07:32.993: INFO: Checking APIGroup: networking.k8s.io Sep 4 14:07:32.994: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Sep 4 14:07:32.994: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Sep 4 14:07:32.994: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Sep 4 14:07:32.994: INFO: Checking APIGroup: policy Sep 4 14:07:32.994: INFO: PreferredVersion.GroupVersion: policy/v1beta1 Sep 4 14:07:32.994: INFO: Versions found [{policy/v1beta1 v1beta1}] Sep 4 14:07:32.995: INFO: policy/v1beta1 matches policy/v1beta1 Sep 4 14:07:32.995: INFO: Checking APIGroup: rbac.authorization.k8s.io Sep 4 14:07:32.995: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Sep 4 14:07:32.995: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Sep 4 14:07:32.995: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Sep 4 14:07:32.995: INFO: Checking APIGroup: storage.k8s.io Sep 4 14:07:32.996: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Sep 4 14:07:32.996: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Sep 4 14:07:32.996: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Sep 4 14:07:32.996: INFO: Checking APIGroup: admissionregistration.k8s.io Sep 4 14:07:32.997: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Sep 4 14:07:32.997: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Sep 4 14:07:32.997: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Sep 4 14:07:32.997: INFO: Checking APIGroup: apiextensions.k8s.io Sep 4 14:07:32.998: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Sep 4 14:07:32.998: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Sep 4 14:07:32.998: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Sep 4 14:07:32.998: INFO: Checking APIGroup: scheduling.k8s.io Sep 4 14:07:33.000: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Sep 4 14:07:33.000: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Sep 4 14:07:33.000: INFO: 
scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Sep 4 14:07:33.000: INFO: Checking APIGroup: coordination.k8s.io Sep 4 14:07:33.001: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Sep 4 14:07:33.001: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Sep 4 14:07:33.001: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Sep 4 14:07:33.001: INFO: Checking APIGroup: node.k8s.io Sep 4 14:07:33.002: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1beta1 Sep 4 14:07:33.002: INFO: Versions found [{node.k8s.io/v1beta1 v1beta1}] Sep 4 14:07:33.002: INFO: node.k8s.io/v1beta1 matches node.k8s.io/v1beta1 Sep 4 14:07:33.002: INFO: Checking APIGroup: discovery.k8s.io Sep 4 14:07:33.003: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1 Sep 4 14:07:33.003: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}] Sep 4 14:07:33.003: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:07:33.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-5362" for this suite. •{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":303,"completed":186,"skipped":3023,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:07:33.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:07:37.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5263" for this suite. 
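Note: hostAliases is the pod-spec field this test exercises; the kubelet merges each entry into the container's /etc/hosts before the container starts. A minimal sketch — the IP, hostnames, and image are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "host-aliases-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// each alias becomes a line in the pod's /etc/hosts
			HostAliases: []corev1.HostAlias{
				{IP: "123.45.67.89", Hostnames: []string{"foo.local", "bar.local"}},
			},
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox", // illustrative
				Command: []string{"cat", "/etc/hosts"},
			}},
		},
	}
	fmt.Println(pod.Name)
}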
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":187,"skipped":3033,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:07:37.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-5bb79d42-de44-461d-9a9e-2ff092bc4db0 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-5bb79d42-de44-461d-9a9e-2ff092bc4db0 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:07:43.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3502" for this suite. • [SLOW TEST:6.212 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":188,"skipped":3043,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:07:43.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the 
deployment to be ready Sep 4 14:07:44.847: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 4 14:07:47.596: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825264, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825264, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825264, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825264, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 4 14:07:49.600: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825264, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825264, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825264, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825264, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 4 14:07:52.674: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Sep 4 14:07:52.694: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:07:52.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6667" for this suite. STEP: Destroying namespace "webhook-6667-markers" for this suite. 
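Note: this variant reuses the webhook shape sketched after test 185; only the rule differs, matching CustomResourceDefinition creation (a cluster-scoped resource in apiextensions.k8s.io). A sketch of just that rule:

package main

import (
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
)

func main() {
	// rule for the CRD-denying webhook; everything else is as in the
	// earlier ValidatingWebhookConfiguration sketch
	crdRule := admissionregistrationv1.RuleWithOperations{
		Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
		Rule: admissionregistrationv1.Rule{
			APIGroups:   []string{"apiextensions.k8s.io"},
			APIVersions: []string{"v1", "v1beta1"},
			Resources:   []string{"customresourcedefinitions"},
		},
	}
	fmt.Printf("%+v\n", crdRule)
}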
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.025 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":303,"completed":189,"skipped":3046,"failed":0} SSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:07:52.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Sep 4 14:07:52.932: INFO: Waiting up to 5m0s for pod "downward-api-d3599fd4-a026-499a-b413-851606438443" in namespace "downward-api-599" to be "Succeeded or Failed" Sep 4 14:07:52.935: INFO: Pod "downward-api-d3599fd4-a026-499a-b413-851606438443": Phase="Pending", Reason="", readiness=false. Elapsed: 3.283284ms Sep 4 14:07:54.940: INFO: Pod "downward-api-d3599fd4-a026-499a-b413-851606438443": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008039002s Sep 4 14:07:56.945: INFO: Pod "downward-api-d3599fd4-a026-499a-b413-851606438443": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012566723s Sep 4 14:07:58.949: INFO: Pod "downward-api-d3599fd4-a026-499a-b413-851606438443": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016478038s STEP: Saw pod success Sep 4 14:07:58.949: INFO: Pod "downward-api-d3599fd4-a026-499a-b413-851606438443" satisfied condition "Succeeded or Failed" Sep 4 14:07:58.951: INFO: Trying to get logs from node latest-worker pod downward-api-d3599fd4-a026-499a-b413-851606438443 container dapi-container: STEP: delete the pod Sep 4 14:07:58.969: INFO: Waiting for pod downward-api-d3599fd4-a026-499a-b413-851606438443 to disappear Sep 4 14:07:59.011: INFO: Pod downward-api-d3599fd4-a026-499a-b413-851606438443 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:07:59.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-599" for this suite. 
• [SLOW TEST:6.176 seconds] [sig-node] Downward API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":303,"completed":190,"skipped":3055,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:07:59.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 14:07:59.137: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Sep 4 14:07:59.148: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:07:59.180: INFO: Number of nodes with available pods: 0 Sep 4 14:07:59.180: INFO: Node latest-worker is running more than one daemon pod Sep 4 14:08:00.184: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:00.187: INFO: Number of nodes with available pods: 0 Sep 4 14:08:00.187: INFO: Node latest-worker is running more than one daemon pod Sep 4 14:08:01.871: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:01.874: INFO: Number of nodes with available pods: 0 Sep 4 14:08:01.874: INFO: Node latest-worker is running more than one daemon pod Sep 4 14:08:02.345: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:02.350: INFO: Number of nodes with available pods: 0 Sep 4 14:08:02.350: INFO: Node latest-worker is running more than one daemon pod Sep 4 14:08:03.185: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:03.187: INFO: Number of nodes with available pods: 0 Sep 4 14:08:03.187: INFO: Node latest-worker is running more than one daemon pod Sep 4 14:08:04.198: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:04.208: INFO: Number of nodes with available pods: 1 Sep 4 14:08:04.208: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:08:05.185: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:05.188: INFO: Number of nodes with available pods: 2 Sep 4 14:08:05.188: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Sep 4 14:08:05.260: INFO: Wrong image for pod: daemon-set-c88zh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:05.260: INFO: Wrong image for pod: daemon-set-jkk78. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:05.297: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:06.603: INFO: Wrong image for pod: daemon-set-c88zh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:06.603: INFO: Wrong image for pod: daemon-set-jkk78. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Sep 4 14:08:06.607: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:07.302: INFO: Wrong image for pod: daemon-set-c88zh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:07.302: INFO: Wrong image for pod: daemon-set-jkk78. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:07.305: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:08.303: INFO: Wrong image for pod: daemon-set-c88zh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:08.303: INFO: Wrong image for pod: daemon-set-jkk78. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:08.303: INFO: Pod daemon-set-jkk78 is not available Sep 4 14:08:08.307: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:09.302: INFO: Wrong image for pod: daemon-set-c88zh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:09.302: INFO: Wrong image for pod: daemon-set-jkk78. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:09.302: INFO: Pod daemon-set-jkk78 is not available Sep 4 14:08:09.305: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:10.302: INFO: Wrong image for pod: daemon-set-c88zh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:10.302: INFO: Wrong image for pod: daemon-set-jkk78. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:10.302: INFO: Pod daemon-set-jkk78 is not available Sep 4 14:08:10.306: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:11.302: INFO: Wrong image for pod: daemon-set-c88zh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:11.302: INFO: Wrong image for pod: daemon-set-jkk78. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:11.302: INFO: Pod daemon-set-jkk78 is not available Sep 4 14:08:11.305: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:12.302: INFO: Wrong image for pod: daemon-set-c88zh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:12.302: INFO: Wrong image for pod: daemon-set-jkk78. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Sep 4 14:08:12.302: INFO: Pod daemon-set-jkk78 is not available Sep 4 14:08:12.306: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:13.385: INFO: Wrong image for pod: daemon-set-c88zh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:13.385: INFO: Wrong image for pod: daemon-set-jkk78. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:13.385: INFO: Pod daemon-set-jkk78 is not available Sep 4 14:08:13.388: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:14.301: INFO: Wrong image for pod: daemon-set-c88zh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:14.301: INFO: Wrong image for pod: daemon-set-jkk78. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:14.301: INFO: Pod daemon-set-jkk78 is not available Sep 4 14:08:14.303: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:15.302: INFO: Wrong image for pod: daemon-set-c88zh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:15.302: INFO: Wrong image for pod: daemon-set-jkk78. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:15.302: INFO: Pod daemon-set-jkk78 is not available Sep 4 14:08:15.305: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:16.306: INFO: Wrong image for pod: daemon-set-c88zh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:16.306: INFO: Wrong image for pod: daemon-set-jkk78. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:16.306: INFO: Pod daemon-set-jkk78 is not available Sep 4 14:08:16.310: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:17.302: INFO: Wrong image for pod: daemon-set-c88zh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:17.302: INFO: Wrong image for pod: daemon-set-jkk78. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:17.302: INFO: Pod daemon-set-jkk78 is not available Sep 4 14:08:17.306: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:18.301: INFO: Wrong image for pod: daemon-set-c88zh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:18.301: INFO: Wrong image for pod: daemon-set-jkk78. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Sep 4 14:08:18.301: INFO: Pod daemon-set-jkk78 is not available Sep 4 14:08:18.304: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:19.303: INFO: Wrong image for pod: daemon-set-c88zh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:19.303: INFO: Wrong image for pod: daemon-set-jkk78. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:19.303: INFO: Pod daemon-set-jkk78 is not available Sep 4 14:08:19.309: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:20.301: INFO: Wrong image for pod: daemon-set-c88zh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:20.301: INFO: Pod daemon-set-hg7xg is not available Sep 4 14:08:20.303: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:21.301: INFO: Wrong image for pod: daemon-set-c88zh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:21.301: INFO: Pod daemon-set-hg7xg is not available Sep 4 14:08:21.305: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:22.359: INFO: Wrong image for pod: daemon-set-c88zh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:22.359: INFO: Pod daemon-set-hg7xg is not available Sep 4 14:08:22.363: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:23.301: INFO: Wrong image for pod: daemon-set-c88zh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:23.301: INFO: Pod daemon-set-hg7xg is not available Sep 4 14:08:23.304: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:24.302: INFO: Wrong image for pod: daemon-set-c88zh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:24.307: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:25.301: INFO: Wrong image for pod: daemon-set-c88zh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:25.304: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:26.303: INFO: Wrong image for pod: daemon-set-c88zh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Sep 4 14:08:26.303: INFO: Pod daemon-set-c88zh is not available Sep 4 14:08:26.308: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:27.302: INFO: Wrong image for pod: daemon-set-c88zh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:27.302: INFO: Pod daemon-set-c88zh is not available Sep 4 14:08:27.305: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:28.302: INFO: Wrong image for pod: daemon-set-c88zh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:28.302: INFO: Pod daemon-set-c88zh is not available Sep 4 14:08:28.305: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:29.302: INFO: Wrong image for pod: daemon-set-c88zh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 4 14:08:29.302: INFO: Pod daemon-set-c88zh is not available Sep 4 14:08:29.306: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:30.303: INFO: Pod daemon-set-qhcv7 is not available Sep 4 14:08:30.307: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Sep 4 14:08:30.310: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:30.314: INFO: Number of nodes with available pods: 1 Sep 4 14:08:30.314: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:08:31.319: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:31.323: INFO: Number of nodes with available pods: 1 Sep 4 14:08:31.323: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:08:32.319: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:32.322: INFO: Number of nodes with available pods: 1 Sep 4 14:08:32.322: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:08:33.319: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:08:33.322: INFO: Number of nodes with available pods: 2 Sep 4 14:08:33.322: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2050, will wait for the garbage collector to delete the pods Sep 4 14:08:33.395: INFO: Deleting DaemonSet.extensions daemon-set took: 7.283544ms Sep 4 14:08:33.795: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.230261ms Sep 4 14:08:39.699: INFO: Number of nodes with available pods: 0 Sep 4 14:08:39.699: INFO: Number of running nodes: 0, number of available pods: 0 Sep 4 14:08:39.702: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2050/daemonsets","resourceVersion":"6820013"},"items":null} Sep 4 14:08:39.705: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2050/pods","resourceVersion":"6820013"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:08:39.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2050" for this suite. 
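Note: the churn above ("Wrong image for pod ...", "... is not available") is the RollingUpdate strategy replacing daemon pods one node at a time after the image in spec.template changed. The strategy itself, in Go API types; maxUnavailable of 1 is the default but is shown explicitly here:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// at most one daemon pod may be unavailable while the new image rolls out
	maxUnavailable := intstr.FromInt(1)
	strategy := appsv1.DaemonSetUpdateStrategy{
		Type: appsv1.RollingUpdateDaemonSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateDaemonSet{
			MaxUnavailable: &maxUnavailable,
		},
	}
	fmt.Printf("%+v\n", strategy)
}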
• [SLOW TEST:40.724 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":303,"completed":191,"skipped":3103,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:08:39.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-278db087-7fd1-4826-9cb7-b534e82f2a48 STEP: Creating a pod to test consume secrets Sep 4 14:08:39.828: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d41866da-2ea1-471a-9755-eee382c3945c" in namespace "projected-9292" to be "Succeeded or Failed" Sep 4 14:08:39.831: INFO: Pod "pod-projected-secrets-d41866da-2ea1-471a-9755-eee382c3945c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.581674ms Sep 4 14:08:42.091: INFO: Pod "pod-projected-secrets-d41866da-2ea1-471a-9755-eee382c3945c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.26342097s Sep 4 14:08:44.095: INFO: Pod "pod-projected-secrets-d41866da-2ea1-471a-9755-eee382c3945c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.267094187s Sep 4 14:08:46.099: INFO: Pod "pod-projected-secrets-d41866da-2ea1-471a-9755-eee382c3945c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.271193538s STEP: Saw pod success Sep 4 14:08:46.099: INFO: Pod "pod-projected-secrets-d41866da-2ea1-471a-9755-eee382c3945c" satisfied condition "Succeeded or Failed" Sep 4 14:08:46.102: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-d41866da-2ea1-471a-9755-eee382c3945c container secret-volume-test: STEP: delete the pod Sep 4 14:08:46.141: INFO: Waiting for pod pod-projected-secrets-d41866da-2ea1-471a-9755-eee382c3945c to disappear Sep 4 14:08:46.156: INFO: Pod pod-projected-secrets-d41866da-2ea1-471a-9755-eee382c3945c no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:08:46.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9292" for this suite. 
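Note: "consumable in multiple volumes" means one secret mounted through two separate volumes in the same pod; in the projected form each volume wraps the secret in a ProjectedVolumeSource. A sketch — the secret name, mount paths, and image are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedSecret wraps a named secret in a projected volume source.
func projectedSecret(secretName string) corev1.VolumeSource {
	return corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				Secret: &corev1.SecretProjection{
					LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
				},
			}},
		},
	}
}

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// the same secret, exposed twice under different mount points
			Volumes: []corev1.Volume{
				{Name: "secret-a", VolumeSource: projectedSecret("demo-secret")},
				{Name: "secret-b", VolumeSource: projectedSecret("demo-secret")},
			},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "ls /etc/a /etc/b"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-a", MountPath: "/etc/a", ReadOnly: true},
					{Name: "secret-b", MountPath: "/etc/b", ReadOnly: true},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}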
• [SLOW TEST:6.420 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":192,"skipped":3107,"failed":0} [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:08:46.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 14:08:46.269: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Sep 4 14:08:46.282: INFO: Number of nodes with available pods: 0 Sep 4 14:08:46.282: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Sep 4 14:08:46.387: INFO: Number of nodes with available pods: 0 Sep 4 14:08:46.387: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:08:47.391: INFO: Number of nodes with available pods: 0 Sep 4 14:08:47.391: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:08:48.469: INFO: Number of nodes with available pods: 0 Sep 4 14:08:48.469: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:08:49.391: INFO: Number of nodes with available pods: 0 Sep 4 14:08:49.391: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:08:50.391: INFO: Number of nodes with available pods: 1 Sep 4 14:08:50.392: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Sep 4 14:08:50.469: INFO: Number of nodes with available pods: 1 Sep 4 14:08:50.469: INFO: Number of running nodes: 0, number of available pods: 1 Sep 4 14:08:51.471: INFO: Number of nodes with available pods: 0 Sep 4 14:08:51.471: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Sep 4 14:08:51.547: INFO: Number of nodes with available pods: 0 Sep 4 14:08:51.547: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:08:52.550: INFO: Number of nodes with available pods: 0 Sep 4 14:08:52.550: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:08:53.550: INFO: Number of nodes with available pods: 0 Sep 4 14:08:53.550: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:08:54.551: INFO: Number of nodes with available pods: 0 Sep 4 14:08:54.551: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:08:55.551: INFO: Number of nodes with available pods: 0 Sep 4 14:08:55.551: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:08:56.551: INFO: Number of nodes with available pods: 0 Sep 4 14:08:56.551: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:08:57.551: INFO: Number of nodes with available pods: 0 Sep 4 14:08:57.551: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:08:58.551: INFO: Number of nodes with available pods: 0 Sep 4 14:08:58.551: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:08:59.550: INFO: Number of nodes with available pods: 0 Sep 4 14:08:59.550: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:09:00.551: INFO: Number of nodes with available pods: 0 Sep 4 14:09:00.551: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:09:01.882: INFO: Number of nodes with available pods: 0 Sep 4 14:09:01.882: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:09:02.571: INFO: Number of nodes with available pods: 0 Sep 4 14:09:02.571: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:09:03.551: INFO: Number of nodes with available pods: 0 Sep 4 14:09:03.551: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:09:04.551: INFO: Number of nodes with available pods: 1 Sep 4 14:09:04.551: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace 
daemonsets-4407, will wait for the garbage collector to delete the pods Sep 4 14:09:04.617: INFO: Deleting DaemonSet.extensions daemon-set took: 8.596584ms Sep 4 14:09:05.017: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.269019ms Sep 4 14:09:19.720: INFO: Number of nodes with available pods: 0 Sep 4 14:09:19.721: INFO: Number of running nodes: 0, number of available pods: 0 Sep 4 14:09:19.724: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4407/daemonsets","resourceVersion":"6820234"},"items":null} Sep 4 14:09:19.727: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4407/pods","resourceVersion":"6820234"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:09:19.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4407" for this suite. • [SLOW TEST:33.636 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":303,"completed":193,"skipped":3107,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:09:19.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium Sep 4 14:09:19.989: INFO: Waiting up to 5m0s for pod "pod-384eac0c-da7b-4d1d-abe7-a87d31e35e42" in namespace "emptydir-10" to be "Succeeded or Failed" Sep 4 14:09:19.993: INFO: Pod "pod-384eac0c-da7b-4d1d-abe7-a87d31e35e42": Phase="Pending", Reason="", readiness=false. Elapsed: 3.761046ms Sep 4 14:09:21.997: INFO: Pod "pod-384eac0c-da7b-4d1d-abe7-a87d31e35e42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007935668s Sep 4 14:09:24.001: INFO: Pod "pod-384eac0c-da7b-4d1d-abe7-a87d31e35e42": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01192179s STEP: Saw pod success Sep 4 14:09:24.001: INFO: Pod "pod-384eac0c-da7b-4d1d-abe7-a87d31e35e42" satisfied condition "Succeeded or Failed" Sep 4 14:09:24.004: INFO: Trying to get logs from node latest-worker pod pod-384eac0c-da7b-4d1d-abe7-a87d31e35e42 container test-container: STEP: delete the pod Sep 4 14:09:24.044: INFO: Waiting for pod pod-384eac0c-da7b-4d1d-abe7-a87d31e35e42 to disappear Sep 4 14:09:24.061: INFO: Pod pod-384eac0c-da7b-4d1d-abe7-a87d31e35e42 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:09:24.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-10" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":194,"skipped":3123,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:09:24.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 4 14:09:24.600: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 4 14:09:26.655: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825364, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825364, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825364, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825364, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 4 14:09:28.685: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825364, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825364, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825364, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825364, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 4 14:09:31.694: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:09:31.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-138" for this suite. STEP: Destroying namespace "webhook-138-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.831 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":303,"completed":195,"skipped":3152,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:09:31.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap 
with name projected-configmap-test-volume-4aedb0e8-7825-4529-bfc4-71f771bae2db STEP: Creating a pod to test consume configMaps Sep 4 14:09:32.032: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-84b27009-3045-434a-b59b-2d462f11e031" in namespace "projected-9687" to be "Succeeded or Failed" Sep 4 14:09:32.035: INFO: Pod "pod-projected-configmaps-84b27009-3045-434a-b59b-2d462f11e031": Phase="Pending", Reason="", readiness=false. Elapsed: 3.914141ms Sep 4 14:09:34.039: INFO: Pod "pod-projected-configmaps-84b27009-3045-434a-b59b-2d462f11e031": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007183368s Sep 4 14:09:36.043: INFO: Pod "pod-projected-configmaps-84b27009-3045-434a-b59b-2d462f11e031": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011383929s Sep 4 14:09:38.057: INFO: Pod "pod-projected-configmaps-84b27009-3045-434a-b59b-2d462f11e031": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025620085s STEP: Saw pod success Sep 4 14:09:38.057: INFO: Pod "pod-projected-configmaps-84b27009-3045-434a-b59b-2d462f11e031" satisfied condition "Succeeded or Failed" Sep 4 14:09:38.060: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-84b27009-3045-434a-b59b-2d462f11e031 container projected-configmap-volume-test: STEP: delete the pod Sep 4 14:09:38.137: INFO: Waiting for pod pod-projected-configmaps-84b27009-3045-434a-b59b-2d462f11e031 to disappear Sep 4 14:09:38.146: INFO: Pod pod-projected-configmaps-84b27009-3045-434a-b59b-2d462f11e031 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:09:38.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9687" for this suite. 
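The pod in the preceding spec consumes a projected ConfigMap while running as a non-root user. A minimal sketch of that arrangement, assuming a runAsUser of 1000 and the agnhost mounttest reader (the ConfigMap name matches the log; the UID, paths, and args are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example           # the real test appends a generated UID
spec:
  securityContext:
    runAsUser: 1000                                # the non-root part of the spec; UID assumed
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20
    args: ["mounttest", "--file_content=/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-4aedb0e8-7825-4529-bfc4-71f771bae2db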
• [SLOW TEST:6.253 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":196,"skipped":3179,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:09:38.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-2742c339-9ea2-4737-94fc-3398c2f4694b STEP: Creating a pod to test consume secrets Sep 4 14:09:38.346: INFO: Waiting up to 5m0s for pod "pod-secrets-9c6bb4b2-0d67-48f4-bc0d-563f7360d866" in namespace "secrets-1753" to be "Succeeded or Failed" Sep 4 14:09:38.363: INFO: Pod "pod-secrets-9c6bb4b2-0d67-48f4-bc0d-563f7360d866": Phase="Pending", Reason="", readiness=false. Elapsed: 16.632557ms Sep 4 14:09:40.367: INFO: Pod "pod-secrets-9c6bb4b2-0d67-48f4-bc0d-563f7360d866": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020568112s Sep 4 14:09:42.370: INFO: Pod "pod-secrets-9c6bb4b2-0d67-48f4-bc0d-563f7360d866": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024213143s STEP: Saw pod success Sep 4 14:09:42.370: INFO: Pod "pod-secrets-9c6bb4b2-0d67-48f4-bc0d-563f7360d866" satisfied condition "Succeeded or Failed" Sep 4 14:09:42.372: INFO: Trying to get logs from node latest-worker pod pod-secrets-9c6bb4b2-0d67-48f4-bc0d-563f7360d866 container secret-volume-test: STEP: delete the pod Sep 4 14:09:42.452: INFO: Waiting for pod pod-secrets-9c6bb4b2-0d67-48f4-bc0d-563f7360d866 to disappear Sep 4 14:09:42.464: INFO: Pod pod-secrets-9c6bb4b2-0d67-48f4-bc0d-563f7360d866 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:09:42.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1753" for this suite. 
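This plain-Secret variant differs from the projected-secret case sketched earlier only in the volume source: the same Secret is mounted twice via two ordinary secret volumes rather than projected ones. A minimal sketch (secret name from the log; everything else illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example                        # the real test appends a generated UID
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20
    args: ["mounttest", "--file_content=/etc/secret-volume-1/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:                                         # two ordinary secret volumes, same Secret
  - name: secret-volume-1
    secret:
      secretName: secret-test-2742c339-9ea2-4737-94fc-3398c2f4694b
  - name: secret-volume-2
    secret:
      secretName: secret-test-2742c339-9ea2-4737-94fc-3398c2f4694b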
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":197,"skipped":3213,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:09:42.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-0cb48757-e5a8-4503-8eb6-f4fa7804f85c STEP: Creating a pod to test consume configMaps Sep 4 14:09:42.888: INFO: Waiting up to 5m0s for pod "pod-configmaps-3be9f06e-7957-482b-9637-f7bc9a103e25" in namespace "configmap-4042" to be "Succeeded or Failed" Sep 4 14:09:42.971: INFO: Pod "pod-configmaps-3be9f06e-7957-482b-9637-f7bc9a103e25": Phase="Pending", Reason="", readiness=false. Elapsed: 83.06336ms Sep 4 14:09:44.975: INFO: Pod "pod-configmaps-3be9f06e-7957-482b-9637-f7bc9a103e25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087322831s Sep 4 14:09:46.979: INFO: Pod "pod-configmaps-3be9f06e-7957-482b-9637-f7bc9a103e25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09131333s STEP: Saw pod success Sep 4 14:09:46.979: INFO: Pod "pod-configmaps-3be9f06e-7957-482b-9637-f7bc9a103e25" satisfied condition "Succeeded or Failed" Sep 4 14:09:46.982: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-3be9f06e-7957-482b-9637-f7bc9a103e25 container configmap-volume-test: STEP: delete the pod Sep 4 14:09:47.268: INFO: Waiting for pod pod-configmaps-3be9f06e-7957-482b-9637-f7bc9a103e25 to disappear Sep 4 14:09:47.288: INFO: Pod pod-configmaps-3be9f06e-7957-482b-9637-f7bc9a103e25 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:09:47.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4042" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":198,"skipped":3254,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:09:47.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 14:09:47.530: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:09:51.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3938" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":303,"completed":199,"skipped":3264,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:09:51.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-4c4c6f65-10e0-4687-b307-f7ed6c61e76f in namespace container-probe-6187 Sep 4 14:09:55.798: INFO: Started pod liveness-4c4c6f65-10e0-4687-b307-f7ed6c61e76f in namespace container-probe-6187 STEP: checking the pod's current state and verifying that restartCount is present Sep 4 14:09:55.800: INFO: Initial restart count of pod liveness-4c4c6f65-10e0-4687-b307-f7ed6c61e76f is 0 Sep 4 14:10:16.359: INFO: 
Restart count of pod container-probe-6187/liveness-4c4c6f65-10e0-4687-b307-f7ed6c61e76f is now 1 (20.558243937s elapsed) Sep 4 14:10:36.416: INFO: Restart count of pod container-probe-6187/liveness-4c4c6f65-10e0-4687-b307-f7ed6c61e76f is now 2 (40.616170745s elapsed) Sep 4 14:10:56.455: INFO: Restart count of pod container-probe-6187/liveness-4c4c6f65-10e0-4687-b307-f7ed6c61e76f is now 3 (1m0.654619669s elapsed) Sep 4 14:11:16.587: INFO: Restart count of pod container-probe-6187/liveness-4c4c6f65-10e0-4687-b307-f7ed6c61e76f is now 4 (1m20.78641992s elapsed) Sep 4 14:12:26.931: INFO: Restart count of pod container-probe-6187/liveness-4c4c6f65-10e0-4687-b307-f7ed6c61e76f is now 5 (2m31.130235966s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:12:26.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6187" for this suite. • [SLOW TEST:155.364 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":303,"completed":200,"skipped":3268,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:12:26.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Sep 4 14:12:27.587: INFO: Pod name pod-release: Found 0 pods out of 1 Sep 4 14:12:32.591: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:12:33.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4672" for this suite. 
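The ReplicationController spec works by relabeling a running pod so it no longer matches the controller's selector; the controller then releases (orphans) that pod and creates a replacement to restore the replica count. A minimal sketch of such a controller, with the pod-release name taken from the log and everything else illustrative; overwriting the pod's name label (e.g. kubectl label pod <pod> name=released --overwrite, a hypothetical value) is what triggers the release:

apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release                                # name from the log; the rest is illustrative
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release                          # changing this label on a live pod releases it
    spec:
      containers:
      - name: pod-release
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20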
• [SLOW TEST:6.623 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":303,"completed":201,"skipped":3281,"failed":0} [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:12:33.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should create and stop a working application [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components Sep 4 14:12:33.818: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend Sep 4 14:12:33.818: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6687' Sep 4 14:12:38.635: INFO: stderr: "" Sep 4 14:12:38.635: INFO: stdout: "service/agnhost-replica created\n" Sep 4 14:12:38.635: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend Sep 4 14:12:38.635: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6687' Sep 4 14:12:40.055: INFO: stderr: "" Sep 4 14:12:40.055: INFO: stdout: "service/agnhost-primary created\n" Sep 4 14:12:40.055: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Sep 4 14:12:40.055: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6687' Sep 4 14:12:40.779: INFO: stderr: "" Sep 4 14:12:40.780: INFO: stdout: "service/frontend created\n" Sep 4 14:12:40.780: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Sep 4 14:12:40.780: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6687' Sep 4 14:12:41.116: INFO: stderr: "" Sep 4 14:12:41.116: INFO: stdout: "deployment.apps/frontend created\n" Sep 4 14:12:41.116: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Sep 4 14:12:41.116: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6687' Sep 4 14:12:41.500: INFO: stderr: "" Sep 4 14:12:41.500: INFO: stdout: "deployment.apps/agnhost-primary created\n" Sep 4 14:12:41.500: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Sep 4 14:12:41.500: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6687' Sep 4 14:12:41.816: INFO: stderr: "" Sep 4 14:12:41.816: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Sep 4 14:12:41.816: INFO: Waiting for all frontend pods to be Running. Sep 4 14:12:51.866: INFO: Waiting for frontend to serve content. Sep 4 14:12:51.876: INFO: Trying to add a new entry to the guestbook. Sep 4 14:12:51.888: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Sep 4 14:12:51.897: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6687' Sep 4 14:12:52.120: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Sep 4 14:12:52.120: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Sep 4 14:12:52.121: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6687' Sep 4 14:12:52.351: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 4 14:12:52.351: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Sep 4 14:12:52.351: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6687' Sep 4 14:12:52.519: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 4 14:12:52.519: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Sep 4 14:12:52.519: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6687' Sep 4 14:12:52.625: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 4 14:12:52.625: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Sep 4 14:12:52.625: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6687' Sep 4 14:12:52.765: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 4 14:12:52.765: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Sep 4 14:12:52.765: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6687' Sep 4 14:12:53.345: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 4 14:12:53.345: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:12:53.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6687" for this suite. 
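Note that the frontend Service manifest echoed above ships with type: LoadBalancer commented out. On a cluster with a load-balancer provider, the externally reachable variant would keep the same name, labels, and port and change only the type line:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: LoadBalancer                               # the line left commented out in the test manifest
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend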
• [SLOW TEST:19.818 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:351 should create and stop a working application [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":303,"completed":202,"skipped":3281,"failed":0} S ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:12:53.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-31df26a6-5fef-414a-8d9a-a5e0717b358c in namespace container-probe-5428 Sep 4 14:13:03.057: INFO: Started pod liveness-31df26a6-5fef-414a-8d9a-a5e0717b358c in namespace container-probe-5428 STEP: checking the pod's current state and verifying that restartCount is present Sep 4 14:13:03.059: INFO: Initial restart count of pod liveness-31df26a6-5fef-414a-8d9a-a5e0717b358c is 0 Sep 4 14:13:23.173: INFO: Restart count of pod container-probe-5428/liveness-31df26a6-5fef-414a-8d9a-a5e0717b358c is now 1 (20.113402066s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:13:23.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5428" for this suite. 
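The restart at the 20-second mark above comes from an HTTP liveness probe against /healthz. A minimal sketch of such a pod, assuming agnhost's liveness server (which deliberately starts failing /healthz after a short while); the port, timings, and threshold are illustrative assumptions rather than values taken from this run:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-example                           # the real test appends a generated UID
spec:
  containers:
  - name: agnhost
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20
    args: ["liveness"]                             # serves /healthz, then starts failing it
    livenessProbe:
      httpGet:
        path: /healthz                             # the probe named by this spec
        port: 8080                                 # assumed port
      initialDelaySeconds: 15                      # illustrative timings
      periodSeconds: 3
      failureThreshold: 1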
• [SLOW TEST:29.827 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":203,"skipped":3282,"failed":0} [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:13:23.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 4 14:13:23.603: INFO: Waiting up to 5m0s for pod "downwardapi-volume-de0ec475-aeab-45bd-9bd2-1b112e652a3c" in namespace "projected-8535" to be "Succeeded or Failed" Sep 4 14:13:23.709: INFO: Pod "downwardapi-volume-de0ec475-aeab-45bd-9bd2-1b112e652a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 106.305952ms Sep 4 14:13:25.714: INFO: Pod "downwardapi-volume-de0ec475-aeab-45bd-9bd2-1b112e652a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1106571s Sep 4 14:13:27.724: INFO: Pod "downwardapi-volume-de0ec475-aeab-45bd-9bd2-1b112e652a3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.120674184s STEP: Saw pod success Sep 4 14:13:27.724: INFO: Pod "downwardapi-volume-de0ec475-aeab-45bd-9bd2-1b112e652a3c" satisfied condition "Succeeded or Failed" Sep 4 14:13:27.726: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-de0ec475-aeab-45bd-9bd2-1b112e652a3c container client-container: STEP: delete the pod Sep 4 14:13:27.840: INFO: Waiting for pod downwardapi-volume-de0ec475-aeab-45bd-9bd2-1b112e652a3c to disappear Sep 4 14:13:27.848: INFO: Pod downwardapi-volume-de0ec475-aeab-45bd-9bd2-1b112e652a3c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:13:27.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8535" for this suite. 
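The downward API volume in this spec asks for the container's memory limit; since the container sets no limit, the kubelet substitutes the node's allocatable memory, which is what the test verifies. A minimal sketch of that wiring through a projected downwardAPI source (names, paths, args, and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example                 # the real test appends a generated UID
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20
    args: ["mounttest", "--file_content=/etc/podinfo/memory_limit"]
    # deliberately no resources.limits.memory here, so the downward API
    # falls back to the node's allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory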
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":204,"skipped":3282,"failed":0} SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:13:27.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:13:32.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9315" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":303,"completed":205,"skipped":3286,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:13:32.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:14:03.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6686" for this suite. STEP: Destroying namespace "nsdeletetest-4255" for this suite. Sep 4 14:14:03.966: INFO: Namespace nsdeletetest-4255 was already deleted STEP: Destroying namespace "nsdeletetest-2073" for this suite. • [SLOW TEST:31.650 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":303,"completed":206,"skipped":3306,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:14:03.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3186 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-3186 I0904 14:14:04.176365 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-3186, replica count: 2 I0904 14:14:07.226760 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0904 14:14:10.227014 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0904 14:14:13.227269 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 4 14:14:13.227: INFO: Creating new exec pod 
Sep 4 14:14:20.273: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-3186 execpodn97l7 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Sep 4 14:14:20.503: INFO: stderr: "I0904 14:14:20.416647 2537 log.go:181] (0xc0000bbad0) (0xc0005b08c0) Create stream\nI0904 14:14:20.416841 2537 log.go:181] (0xc0000bbad0) (0xc0005b08c0) Stream added, broadcasting: 1\nI0904 14:14:20.419728 2537 log.go:181] (0xc0000bbad0) Reply frame received for 1\nI0904 14:14:20.419760 2537 log.go:181] (0xc0000bbad0) (0xc000ba43c0) Create stream\nI0904 14:14:20.419769 2537 log.go:181] (0xc0000bbad0) (0xc000ba43c0) Stream added, broadcasting: 3\nI0904 14:14:20.420614 2537 log.go:181] (0xc0000bbad0) Reply frame received for 3\nI0904 14:14:20.420637 2537 log.go:181] (0xc0000bbad0) (0xc0007283c0) Create stream\nI0904 14:14:20.420644 2537 log.go:181] (0xc0000bbad0) (0xc0007283c0) Stream added, broadcasting: 5\nI0904 14:14:20.421652 2537 log.go:181] (0xc0000bbad0) Reply frame received for 5\nI0904 14:14:20.491671 2537 log.go:181] (0xc0000bbad0) Data frame received for 5\nI0904 14:14:20.491697 2537 log.go:181] (0xc0007283c0) (5) Data frame handling\nI0904 14:14:20.491709 2537 log.go:181] (0xc0007283c0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0904 14:14:20.492257 2537 log.go:181] (0xc0000bbad0) Data frame received for 5\nI0904 14:14:20.492278 2537 log.go:181] (0xc0007283c0) (5) Data frame handling\nI0904 14:14:20.492299 2537 log.go:181] (0xc0007283c0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0904 14:14:20.492656 2537 log.go:181] (0xc0000bbad0) Data frame received for 3\nI0904 14:14:20.492681 2537 log.go:181] (0xc000ba43c0) (3) Data frame handling\nI0904 14:14:20.492694 2537 log.go:181] (0xc0000bbad0) Data frame received for 5\nI0904 14:14:20.492710 2537 log.go:181] (0xc0007283c0) (5) Data frame handling\nI0904 14:14:20.494895 2537 log.go:181] (0xc0000bbad0) Data frame received for 1\nI0904 14:14:20.494915 2537 log.go:181] (0xc0005b08c0) (1) Data frame handling\nI0904 14:14:20.494927 2537 log.go:181] (0xc0005b08c0) (1) Data frame sent\nI0904 14:14:20.494943 2537 log.go:181] (0xc0000bbad0) (0xc0005b08c0) Stream removed, broadcasting: 1\nI0904 14:14:20.494956 2537 log.go:181] (0xc0000bbad0) Go away received\nI0904 14:14:20.495415 2537 log.go:181] (0xc0000bbad0) (0xc0005b08c0) Stream removed, broadcasting: 1\nI0904 14:14:20.495431 2537 log.go:181] (0xc0000bbad0) (0xc000ba43c0) Stream removed, broadcasting: 3\nI0904 14:14:20.495438 2537 log.go:181] (0xc0000bbad0) (0xc0007283c0) Stream removed, broadcasting: 5\n" Sep 4 14:14:20.503: INFO: stdout: "" Sep 4 14:14:20.504: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-3186 execpodn97l7 -- /bin/sh -x -c nc -zv -t -w 2 10.96.36.227 80' Sep 4 14:14:20.714: INFO: stderr: "I0904 14:14:20.623385 2555 log.go:181] (0xc000248a50) (0xc000658460) Create stream\nI0904 14:14:20.623451 2555 log.go:181] (0xc000248a50) (0xc000658460) Stream added, broadcasting: 1\nI0904 14:14:20.625507 2555 log.go:181] (0xc000248a50) Reply frame received for 1\nI0904 14:14:20.625542 2555 log.go:181] (0xc000248a50) (0xc0006c0000) Create stream\nI0904 14:14:20.625553 2555 log.go:181] (0xc000248a50) (0xc0006c0000) Stream added, broadcasting: 3\nI0904 14:14:20.626319 2555 log.go:181] (0xc000248a50) Reply frame received for 3\nI0904 14:14:20.626371 2555 log.go:181] (0xc000248a50) 
(0xc0006c00a0) Create stream\nI0904 14:14:20.626399 2555 log.go:181] (0xc000248a50) (0xc0006c00a0) Stream added, broadcasting: 5\nI0904 14:14:20.627181 2555 log.go:181] (0xc000248a50) Reply frame received for 5\nI0904 14:14:20.705300 2555 log.go:181] (0xc000248a50) Data frame received for 5\nI0904 14:14:20.705326 2555 log.go:181] (0xc0006c00a0) (5) Data frame handling\nI0904 14:14:20.705337 2555 log.go:181] (0xc0006c00a0) (5) Data frame sent\nI0904 14:14:20.705342 2555 log.go:181] (0xc000248a50) Data frame received for 5\nI0904 14:14:20.705346 2555 log.go:181] (0xc0006c00a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.36.227 80\nConnection to 10.96.36.227 80 port [tcp/http] succeeded!\nI0904 14:14:20.705494 2555 log.go:181] (0xc000248a50) Data frame received for 3\nI0904 14:14:20.705520 2555 log.go:181] (0xc0006c0000) (3) Data frame handling\nI0904 14:14:20.707468 2555 log.go:181] (0xc000248a50) Data frame received for 1\nI0904 14:14:20.707485 2555 log.go:181] (0xc000658460) (1) Data frame handling\nI0904 14:14:20.707495 2555 log.go:181] (0xc000658460) (1) Data frame sent\nI0904 14:14:20.707505 2555 log.go:181] (0xc000248a50) (0xc000658460) Stream removed, broadcasting: 1\nI0904 14:14:20.707821 2555 log.go:181] (0xc000248a50) (0xc000658460) Stream removed, broadcasting: 1\nI0904 14:14:20.707837 2555 log.go:181] (0xc000248a50) (0xc0006c0000) Stream removed, broadcasting: 3\nI0904 14:14:20.707844 2555 log.go:181] (0xc000248a50) (0xc0006c00a0) Stream removed, broadcasting: 5\n" Sep 4 14:14:20.714: INFO: stdout: "" Sep 4 14:14:20.714: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-3186 execpodn97l7 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 30715' Sep 4 14:14:20.982: INFO: stderr: "I0904 14:14:20.881596 2570 log.go:181] (0xc0007289a0) (0xc000720460) Create stream\nI0904 14:14:20.881805 2570 log.go:181] (0xc0007289a0) (0xc000720460) Stream added, broadcasting: 1\nI0904 14:14:20.885927 2570 log.go:181] (0xc0007289a0) Reply frame received for 1\nI0904 14:14:20.885982 2570 log.go:181] (0xc0007289a0) (0xc000720000) Create stream\nI0904 14:14:20.886001 2570 log.go:181] (0xc0007289a0) (0xc000720000) Stream added, broadcasting: 3\nI0904 14:14:20.886853 2570 log.go:181] (0xc0007289a0) Reply frame received for 3\nI0904 14:14:20.886886 2570 log.go:181] (0xc0007289a0) (0xc00031c0a0) Create stream\nI0904 14:14:20.886901 2570 log.go:181] (0xc0007289a0) (0xc00031c0a0) Stream added, broadcasting: 5\nI0904 14:14:20.887659 2570 log.go:181] (0xc0007289a0) Reply frame received for 5\nI0904 14:14:20.967707 2570 log.go:181] (0xc0007289a0) Data frame received for 5\nI0904 14:14:20.967766 2570 log.go:181] (0xc00031c0a0) (5) Data frame handling\nI0904 14:14:20.967788 2570 log.go:181] (0xc00031c0a0) (5) Data frame sent\nI0904 14:14:20.967801 2570 log.go:181] (0xc0007289a0) Data frame received for 5\nI0904 14:14:20.967812 2570 log.go:181] (0xc00031c0a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.11 30715\nConnection to 172.18.0.11 30715 port [tcp/30715] succeeded!\nI0904 14:14:20.967840 2570 log.go:181] (0xc0007289a0) Data frame received for 3\nI0904 14:14:20.967852 2570 log.go:181] (0xc000720000) (3) Data frame handling\nI0904 14:14:20.975335 2570 log.go:181] (0xc0007289a0) Data frame received for 1\nI0904 14:14:20.975369 2570 log.go:181] (0xc000720460) (1) Data frame handling\nI0904 14:14:20.975386 2570 log.go:181] (0xc000720460) (1) Data frame sent\nI0904 14:14:20.975401 2570 log.go:181] (0xc0007289a0) (0xc000720460) 
Stream removed, broadcasting: 1\nI0904 14:14:20.975421 2570 log.go:181] (0xc0007289a0) Go away received\nI0904 14:14:20.975800 2570 log.go:181] (0xc0007289a0) (0xc000720460) Stream removed, broadcasting: 1\nI0904 14:14:20.975822 2570 log.go:181] (0xc0007289a0) (0xc000720000) Stream removed, broadcasting: 3\nI0904 14:14:20.975832 2570 log.go:181] (0xc0007289a0) (0xc00031c0a0) Stream removed, broadcasting: 5\n" Sep 4 14:14:20.983: INFO: stdout: "" Sep 4 14:14:20.983: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-3186 execpodn97l7 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 30715' Sep 4 14:14:21.191: INFO: stderr: "I0904 14:14:21.111699 2588 log.go:181] (0xc00003a2c0) (0xc0009da1e0) Create stream\nI0904 14:14:21.111753 2588 log.go:181] (0xc00003a2c0) (0xc0009da1e0) Stream added, broadcasting: 1\nI0904 14:14:21.113767 2588 log.go:181] (0xc00003a2c0) Reply frame received for 1\nI0904 14:14:21.113812 2588 log.go:181] (0xc00003a2c0) (0xc0006306e0) Create stream\nI0904 14:14:21.113825 2588 log.go:181] (0xc00003a2c0) (0xc0006306e0) Stream added, broadcasting: 3\nI0904 14:14:21.114719 2588 log.go:181] (0xc00003a2c0) Reply frame received for 3\nI0904 14:14:21.114758 2588 log.go:181] (0xc00003a2c0) (0xc000630780) Create stream\nI0904 14:14:21.114768 2588 log.go:181] (0xc00003a2c0) (0xc000630780) Stream added, broadcasting: 5\nI0904 14:14:21.115721 2588 log.go:181] (0xc00003a2c0) Reply frame received for 5\nI0904 14:14:21.180914 2588 log.go:181] (0xc00003a2c0) Data frame received for 5\nI0904 14:14:21.180951 2588 log.go:181] (0xc000630780) (5) Data frame handling\nI0904 14:14:21.180970 2588 log.go:181] (0xc000630780) (5) Data frame sent\nI0904 14:14:21.180987 2588 log.go:181] (0xc00003a2c0) Data frame received for 5\n+ nc -zv -t -w 2 172.18.0.14 30715\nConnection to 172.18.0.14 30715 port [tcp/30715] succeeded!\nI0904 14:14:21.180996 2588 log.go:181] (0xc000630780) (5) Data frame handling\nI0904 14:14:21.181330 2588 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0904 14:14:21.181343 2588 log.go:181] (0xc0006306e0) (3) Data frame handling\nI0904 14:14:21.182724 2588 log.go:181] (0xc00003a2c0) Data frame received for 1\nI0904 14:14:21.182748 2588 log.go:181] (0xc0009da1e0) (1) Data frame handling\nI0904 14:14:21.182759 2588 log.go:181] (0xc0009da1e0) (1) Data frame sent\nI0904 14:14:21.182778 2588 log.go:181] (0xc00003a2c0) (0xc0009da1e0) Stream removed, broadcasting: 1\nI0904 14:14:21.182795 2588 log.go:181] (0xc00003a2c0) Go away received\nI0904 14:14:21.183184 2588 log.go:181] (0xc00003a2c0) (0xc0009da1e0) Stream removed, broadcasting: 1\nI0904 14:14:21.183204 2588 log.go:181] (0xc00003a2c0) (0xc0006306e0) Stream removed, broadcasting: 3\nI0904 14:14:21.183214 2588 log.go:181] (0xc00003a2c0) (0xc000630780) Stream removed, broadcasting: 5\n" Sep 4 14:14:21.191: INFO: stdout: "" Sep 4 14:14:21.191: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:14:21.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3186" for this suite. 
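The three nc probes above exercise reachability three ways: by service DNS name (externalname-service:80), by ClusterIP (10.96.36.227:80), and by NodePort on each node (172.18.0.11:30715 and 172.18.0.14:30715). The same checks can be reproduced from any throwaway pod; a sketch reusing the addresses observed above:

kubectl -n services-3186 run netcheck --rm -it --restart=Never --image=busybox:1.28 -- \
  sh -c 'nc -zv -w 2 externalname-service 80 && nc -zv -w 2 10.96.36.227 80 && nc -zv -w 2 172.18.0.11 30715'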
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:17.278 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":303,"completed":207,"skipped":3465,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:14:21.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 4 14:14:22.058: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 4 14:14:24.288: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825662, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825662, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825662, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825662, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 4 14:14:27.362: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated 
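The listing step above enumerates the webhook configurations the test created, and the deletion that follows removes them as a collection, presumably selected together (the label key below is a hypothetical stand-in, not one taken from the log). A sketch of an equivalent label-driven list and collection delete:

kubectl get mutatingwebhookconfigurations -l e2e-list-test=true
kubectl delete mutatingwebhookconfigurations -l e2e-list-test=true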
STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:14:28.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5035" for this suite. STEP: Destroying namespace "webhook-5035-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.251 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":303,"completed":208,"skipped":3469,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:14:29.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all Sep 4 14:14:30.225: INFO: Waiting up to 5m0s for pod "client-containers-0dd59ea7-1413-4a9b-b6c9-9103546912fe" in namespace "containers-4926" to be "Succeeded or Failed" Sep 4 14:14:30.481: INFO: Pod "client-containers-0dd59ea7-1413-4a9b-b6c9-9103546912fe": Phase="Pending", Reason="", readiness=false. Elapsed: 255.930202ms Sep 4 14:14:32.487: INFO: Pod "client-containers-0dd59ea7-1413-4a9b-b6c9-9103546912fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.261248747s Sep 4 14:14:34.516: INFO: Pod "client-containers-0dd59ea7-1413-4a9b-b6c9-9103546912fe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.290609836s Sep 4 14:14:36.520: INFO: Pod "client-containers-0dd59ea7-1413-4a9b-b6c9-9103546912fe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.294671732s STEP: Saw pod success Sep 4 14:14:36.520: INFO: Pod "client-containers-0dd59ea7-1413-4a9b-b6c9-9103546912fe" satisfied condition "Succeeded or Failed" Sep 4 14:14:36.524: INFO: Trying to get logs from node latest-worker2 pod client-containers-0dd59ea7-1413-4a9b-b6c9-9103546912fe container test-container: STEP: delete the pod Sep 4 14:14:36.571: INFO: Waiting for pod client-containers-0dd59ea7-1413-4a9b-b6c9-9103546912fe to disappear Sep 4 14:14:36.579: INFO: Pod client-containers-0dd59ea7-1413-4a9b-b6c9-9103546912fe no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:14:36.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4926" for this suite. • [SLOW TEST:7.085 seconds] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":303,"completed":209,"skipped":3501,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:14:36.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 4 14:14:37.431: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 4 14:14:39.780: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825677, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825677, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825677, 
loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825677, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 4 14:14:41.785: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825677, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825677, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825677, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825677, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 4 14:14:44.874: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:14:44.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8079" for this suite. STEP: Destroying namespace "webhook-8079-markers" for this suite. 
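The webhook registered above points at a service the apiserver cannot reach, with a fail-closed policy, so the configmap create is rejected outright rather than silently admitted. A sketch of such a fail-closed registration, with hypothetical names throughout; it is scoped cluster-wide here for brevity, whereas the test confines it to the dedicated namespace it creates:

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: demo-fail-closed
webhooks:
- name: fail-closed.example.com
  failurePolicy: Fail              # reject the request whenever the webhook is unreachable
  sideEffects: None
  admissionReviewVersions: ["v1"]
  clientConfig:
    service:
      namespace: default
      name: no-such-service        # hypothetical: nothing listens here, so every call fails
      path: /validate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
EOF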
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.519 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":303,"completed":210,"skipped":3501,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:14:45.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Sep 4 14:14:45.224: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-153' Sep 4 14:14:45.373: INFO: stderr: "" Sep 4 14:14:45.373: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Sep 4 14:14:45.373: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod -o json --namespace=kubectl-153' Sep 4 14:14:45.719: INFO: stderr: "" Sep 4 14:14:45.719: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-09-04T14:14:45Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n 
\"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-09-04T14:14:45Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-153\",\n \"resourceVersion\": \"6822057\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-153/pods/e2e-test-httpd-pod\",\n \"uid\": \"22dd9968-de88-41d1-8e9e-485348d96f1b\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-wtvdt\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-wtvdt\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-wtvdt\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-04T14:14:45Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"phase\": \"Pending\",\n \"qosClass\": \"BestEffort\"\n }\n}\n" Sep 4 14:14:45.719: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config replace -f - --dry-run server --namespace=kubectl-153' Sep 4 14:14:46.139: INFO: stderr: "W0904 14:14:45.789834 2636 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.\n" Sep 4 14:14:46.139: INFO: stdout: "pod/e2e-test-httpd-pod replaced (dry run)\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine Sep 4 14:14:46.142: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-153' Sep 4 14:15:00.010: INFO: stderr: "" Sep 4 14:15:00.010: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:15:00.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-153" for this suite. 
• [SLOW TEST:14.924 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:919 should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":303,"completed":211,"skipped":3504,"failed":0} SSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:15:00.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:15:22.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1704" for this suite. 
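"Locally restarted" in the test name means the pod-level restartPolicy is OnFailure: the kubelet restarts the failing container in place rather than the Job controller replacing the pod. A sketch of a Job wired that way (the name and the fail-once trick are assumptions; the e2e test uses its own image):

kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: demo-flaky          # hypothetical name
spec:
  completions: 2
  parallelism: 2
  template:
    spec:
      restartPolicy: OnFailure   # local restart: same pod, container restarted in place
      containers:
      - name: worker
        image: busybox:1.28
        # Fail on the first attempt, succeed after the in-place restart:
        # the emptyDir survives container restarts, so the marker file persists.
        command: ["sh", "-c", "if [ -f /work/done ]; then exit 0; fi; touch /work/done; exit 1"]
        volumeMounts:
        - name: work
          mountPath: /work
      volumes:
      - name: work
        emptyDir: {}
EOF
kubectl wait --for=condition=complete job/demo-flaky --timeout=120s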
• [SLOW TEST:22.319 seconds] [sig-apps] Job /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":303,"completed":212,"skipped":3507,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:15:22.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Sep 4 14:15:22.544: INFO: Waiting up to 5m0s for pod "pod-ec41e92f-230c-4ba1-9917-9621cdf3e91b" in namespace "emptydir-1972" to be "Succeeded or Failed" Sep 4 14:15:22.547: INFO: Pod "pod-ec41e92f-230c-4ba1-9917-9621cdf3e91b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.209753ms Sep 4 14:15:24.714: INFO: Pod "pod-ec41e92f-230c-4ba1-9917-9621cdf3e91b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170189018s Sep 4 14:15:26.718: INFO: Pod "pod-ec41e92f-230c-4ba1-9917-9621cdf3e91b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173811753s Sep 4 14:15:28.722: INFO: Pod "pod-ec41e92f-230c-4ba1-9917-9621cdf3e91b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.177850288s STEP: Saw pod success Sep 4 14:15:28.722: INFO: Pod "pod-ec41e92f-230c-4ba1-9917-9621cdf3e91b" satisfied condition "Succeeded or Failed" Sep 4 14:15:28.725: INFO: Trying to get logs from node latest-worker pod pod-ec41e92f-230c-4ba1-9917-9621cdf3e91b container test-container: STEP: delete the pod Sep 4 14:15:28.835: INFO: Waiting for pod pod-ec41e92f-230c-4ba1-9917-9621cdf3e91b to disappear Sep 4 14:15:28.864: INFO: Pod pod-ec41e92f-230c-4ba1-9917-9621cdf3e91b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:15:28.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1972" for this suite. 
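The (non-root,0666,tmpfs) variant mounts a memory-backed emptyDir, writes a file with mode 0666 as a non-root user, and verifies both the permissions and the tmpfs medium. A rough equivalent (the pod name, UID, and inline checks are assumptions; the suite uses its dedicated mounttest image):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-emptydir       # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001         # non-root; kubelet creates the tmpfs world-writable
  containers:
  - name: test
    image: busybox:1.28
    command: ["sh", "-c", "touch /mnt/v/f && chmod 0666 /mnt/v/f && ls -l /mnt/v/f && grep ' /mnt/v ' /proc/mounts"]
    volumeMounts:
    - name: v
      mountPath: /mnt/v
  volumes:
  - name: v
    emptyDir:
      medium: Memory        # tmpfs-backed
EOF
kubectl logs demo-emptydir    # expect -rw-rw-rw- and a tmpfs mount entry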
• [SLOW TEST:6.537 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":213,"skipped":3511,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:15:28.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-630 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Sep 4 14:15:29.211: INFO: Found 0 stateful pods, waiting for 3 Sep 4 14:15:39.615: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 4 14:15:39.615: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 4 14:15:39.615: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Sep 4 14:15:49.215: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 4 14:15:49.215: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 4 14:15:49.215: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Sep 4 14:15:49.223: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-630 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 4 14:15:49.491: INFO: stderr: "I0904 14:15:49.371119 2669 log.go:181] (0xc00003b8c0) (0xc00055fb80) Create stream\nI0904 14:15:49.371196 2669 log.go:181] (0xc00003b8c0) (0xc00055fb80) Stream added, broadcasting: 1\nI0904 14:15:49.373337 2669 log.go:181] (0xc00003b8c0) Reply frame received for 1\nI0904 14:15:49.373372 2669 log.go:181] (0xc00003b8c0) (0xc0008181e0) Create stream\nI0904 14:15:49.373384 2669 
log.go:181] (0xc00003b8c0) (0xc0008181e0) Stream added, broadcasting: 3\nI0904 14:15:49.374423 2669 log.go:181] (0xc00003b8c0) Reply frame received for 3\nI0904 14:15:49.374470 2669 log.go:181] (0xc00003b8c0) (0xc000818280) Create stream\nI0904 14:15:49.374490 2669 log.go:181] (0xc00003b8c0) (0xc000818280) Stream added, broadcasting: 5\nI0904 14:15:49.375311 2669 log.go:181] (0xc00003b8c0) Reply frame received for 5\nI0904 14:15:49.453690 2669 log.go:181] (0xc00003b8c0) Data frame received for 5\nI0904 14:15:49.453712 2669 log.go:181] (0xc000818280) (5) Data frame handling\nI0904 14:15:49.453724 2669 log.go:181] (0xc000818280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0904 14:15:49.483069 2669 log.go:181] (0xc00003b8c0) Data frame received for 3\nI0904 14:15:49.483110 2669 log.go:181] (0xc0008181e0) (3) Data frame handling\nI0904 14:15:49.483118 2669 log.go:181] (0xc0008181e0) (3) Data frame sent\nI0904 14:15:49.483126 2669 log.go:181] (0xc00003b8c0) Data frame received for 3\nI0904 14:15:49.483135 2669 log.go:181] (0xc0008181e0) (3) Data frame handling\nI0904 14:15:49.483165 2669 log.go:181] (0xc00003b8c0) Data frame received for 5\nI0904 14:15:49.483171 2669 log.go:181] (0xc000818280) (5) Data frame handling\nI0904 14:15:49.484334 2669 log.go:181] (0xc00003b8c0) Data frame received for 1\nI0904 14:15:49.484363 2669 log.go:181] (0xc00055fb80) (1) Data frame handling\nI0904 14:15:49.484378 2669 log.go:181] (0xc00055fb80) (1) Data frame sent\nI0904 14:15:49.484390 2669 log.go:181] (0xc00003b8c0) (0xc00055fb80) Stream removed, broadcasting: 1\nI0904 14:15:49.484406 2669 log.go:181] (0xc00003b8c0) Go away received\nI0904 14:15:49.484861 2669 log.go:181] (0xc00003b8c0) (0xc00055fb80) Stream removed, broadcasting: 1\nI0904 14:15:49.484879 2669 log.go:181] (0xc00003b8c0) (0xc0008181e0) Stream removed, broadcasting: 3\nI0904 14:15:49.484886 2669 log.go:181] (0xc00003b8c0) (0xc000818280) Stream removed, broadcasting: 5\n" Sep 4 14:15:49.491: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 4 14:15:49.491: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Sep 4 14:15:59.556: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Sep 4 14:16:09.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-630 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 4 14:16:09.811: INFO: stderr: "I0904 14:16:09.719776 2687 log.go:181] (0xc000d2adc0) (0xc00048c1e0) Create stream\nI0904 14:16:09.719819 2687 log.go:181] (0xc000d2adc0) (0xc00048c1e0) Stream added, broadcasting: 1\nI0904 14:16:09.724067 2687 log.go:181] (0xc000d2adc0) Reply frame received for 1\nI0904 14:16:09.724106 2687 log.go:181] (0xc000d2adc0) (0xc000ade000) Create stream\nI0904 14:16:09.724118 2687 log.go:181] (0xc000d2adc0) (0xc000ade000) Stream added, broadcasting: 3\nI0904 14:16:09.724935 2687 log.go:181] (0xc000d2adc0) Reply frame received for 3\nI0904 14:16:09.724959 2687 log.go:181] (0xc000d2adc0) (0xc000ade140) Create stream\nI0904 14:16:09.724966 2687 log.go:181] (0xc000d2adc0) (0xc000ade140) Stream added, broadcasting: 5\nI0904 14:16:09.725660 2687 log.go:181] (0xc000d2adc0) Reply 
frame received for 5\nI0904 14:16:09.797941 2687 log.go:181] (0xc000d2adc0) Data frame received for 3\nI0904 14:16:09.797968 2687 log.go:181] (0xc000ade000) (3) Data frame handling\nI0904 14:16:09.797991 2687 log.go:181] (0xc000ade000) (3) Data frame sent\nI0904 14:16:09.798084 2687 log.go:181] (0xc000d2adc0) Data frame received for 5\nI0904 14:16:09.798106 2687 log.go:181] (0xc000ade140) (5) Data frame handling\nI0904 14:16:09.798114 2687 log.go:181] (0xc000ade140) (5) Data frame sent\nI0904 14:16:09.798120 2687 log.go:181] (0xc000d2adc0) Data frame received for 5\nI0904 14:16:09.798124 2687 log.go:181] (0xc000ade140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0904 14:16:09.798140 2687 log.go:181] (0xc000d2adc0) Data frame received for 3\nI0904 14:16:09.798162 2687 log.go:181] (0xc000ade000) (3) Data frame handling\nI0904 14:16:09.799124 2687 log.go:181] (0xc000d2adc0) Data frame received for 1\nI0904 14:16:09.799144 2687 log.go:181] (0xc00048c1e0) (1) Data frame handling\nI0904 14:16:09.799156 2687 log.go:181] (0xc00048c1e0) (1) Data frame sent\nI0904 14:16:09.799174 2687 log.go:181] (0xc000d2adc0) (0xc00048c1e0) Stream removed, broadcasting: 1\nI0904 14:16:09.799201 2687 log.go:181] (0xc000d2adc0) Go away received\nI0904 14:16:09.799531 2687 log.go:181] (0xc000d2adc0) (0xc00048c1e0) Stream removed, broadcasting: 1\nI0904 14:16:09.799542 2687 log.go:181] (0xc000d2adc0) (0xc000ade000) Stream removed, broadcasting: 3\nI0904 14:16:09.799547 2687 log.go:181] (0xc000d2adc0) (0xc000ade140) Stream removed, broadcasting: 5\n" Sep 4 14:16:09.811: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 4 14:16:09.811: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 4 14:16:19.991: INFO: Waiting for StatefulSet statefulset-630/ss2 to complete update Sep 4 14:16:19.991: INFO: Waiting for Pod statefulset-630/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 4 14:16:19.991: INFO: Waiting for Pod statefulset-630/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 4 14:16:30.231: INFO: Waiting for StatefulSet statefulset-630/ss2 to complete update Sep 4 14:16:30.231: INFO: Waiting for Pod statefulset-630/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 4 14:16:30.231: INFO: Waiting for Pod statefulset-630/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 4 14:16:41.011: INFO: Waiting for StatefulSet statefulset-630/ss2 to complete update Sep 4 14:16:41.011: INFO: Waiting for Pod statefulset-630/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 4 14:16:41.011: INFO: Waiting for Pod statefulset-630/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 4 14:16:49.999: INFO: Waiting for StatefulSet statefulset-630/ss2 to complete update Sep 4 14:16:49.999: INFO: Waiting for Pod statefulset-630/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 4 14:17:00.038: INFO: Waiting for StatefulSet statefulset-630/ss2 to complete update STEP: Rolling back to a previous revision Sep 4 14:17:09.999: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-630 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 4 14:17:10.283: INFO: stderr: "I0904 14:17:10.132130 2705 log.go:181] (0xc00003a0b0) 
(0xc000730000) Create stream\nI0904 14:17:10.132245 2705 log.go:181] (0xc00003a0b0) (0xc000730000) Stream added, broadcasting: 1\nI0904 14:17:10.134923 2705 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0904 14:17:10.134980 2705 log.go:181] (0xc00003a0b0) (0xc0005a6140) Create stream\nI0904 14:17:10.134995 2705 log.go:181] (0xc00003a0b0) (0xc0005a6140) Stream added, broadcasting: 3\nI0904 14:17:10.135942 2705 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0904 14:17:10.135992 2705 log.go:181] (0xc00003a0b0) (0xc0007300a0) Create stream\nI0904 14:17:10.136002 2705 log.go:181] (0xc00003a0b0) (0xc0007300a0) Stream added, broadcasting: 5\nI0904 14:17:10.136940 2705 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0904 14:17:10.220199 2705 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0904 14:17:10.220231 2705 log.go:181] (0xc0007300a0) (5) Data frame handling\nI0904 14:17:10.220245 2705 log.go:181] (0xc0007300a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0904 14:17:10.269054 2705 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 14:17:10.269113 2705 log.go:181] (0xc0005a6140) (3) Data frame handling\nI0904 14:17:10.269138 2705 log.go:181] (0xc0005a6140) (3) Data frame sent\nI0904 14:17:10.269175 2705 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0904 14:17:10.269189 2705 log.go:181] (0xc0007300a0) (5) Data frame handling\nI0904 14:17:10.269227 2705 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0904 14:17:10.269270 2705 log.go:181] (0xc0005a6140) (3) Data frame handling\nI0904 14:17:10.271814 2705 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0904 14:17:10.271853 2705 log.go:181] (0xc000730000) (1) Data frame handling\nI0904 14:17:10.271869 2705 log.go:181] (0xc000730000) (1) Data frame sent\nI0904 14:17:10.271884 2705 log.go:181] (0xc00003a0b0) (0xc000730000) Stream removed, broadcasting: 1\nI0904 14:17:10.272312 2705 log.go:181] (0xc00003a0b0) (0xc000730000) Stream removed, broadcasting: 1\nI0904 14:17:10.272335 2705 log.go:181] (0xc00003a0b0) (0xc0005a6140) Stream removed, broadcasting: 3\nI0904 14:17:10.272352 2705 log.go:181] (0xc00003a0b0) (0xc0007300a0) Stream removed, broadcasting: 5\n" Sep 4 14:17:10.283: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 4 14:17:10.283: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 4 14:17:20.315: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Sep 4 14:17:30.472: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-630 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 4 14:17:30.710: INFO: stderr: "I0904 14:17:30.614291 2723 log.go:181] (0xc000562e70) (0xc000c2a500) Create stream\nI0904 14:17:30.614366 2723 log.go:181] (0xc000562e70) (0xc000c2a500) Stream added, broadcasting: 1\nI0904 14:17:30.616871 2723 log.go:181] (0xc000562e70) Reply frame received for 1\nI0904 14:17:30.616941 2723 log.go:181] (0xc000562e70) (0xc000891360) Create stream\nI0904 14:17:30.616971 2723 log.go:181] (0xc000562e70) (0xc000891360) Stream added, broadcasting: 3\nI0904 14:17:30.618666 2723 log.go:181] (0xc000562e70) Reply frame received for 3\nI0904 14:17:30.618703 2723 log.go:181] (0xc000562e70) (0xc000c2a000) Create stream\nI0904 14:17:30.618716 2723 log.go:181] (0xc000562e70) (0xc000c2a000) 
Stream added, broadcasting: 5\nI0904 14:17:30.619595 2723 log.go:181] (0xc000562e70) Reply frame received for 5\nI0904 14:17:30.698145 2723 log.go:181] (0xc000562e70) Data frame received for 5\nI0904 14:17:30.698184 2723 log.go:181] (0xc000c2a000) (5) Data frame handling\nI0904 14:17:30.698209 2723 log.go:181] (0xc000c2a000) (5) Data frame sent\nI0904 14:17:30.698220 2723 log.go:181] (0xc000562e70) Data frame received for 5\nI0904 14:17:30.698230 2723 log.go:181] (0xc000c2a000) (5) Data frame handling\nI0904 14:17:30.698244 2723 log.go:181] (0xc000562e70) Data frame received for 3\nI0904 14:17:30.698251 2723 log.go:181] (0xc000891360) (3) Data frame handling\nI0904 14:17:30.698257 2723 log.go:181] (0xc000891360) (3) Data frame sent\nI0904 14:17:30.698264 2723 log.go:181] (0xc000562e70) Data frame received for 3\nI0904 14:17:30.698272 2723 log.go:181] (0xc000891360) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0904 14:17:30.699331 2723 log.go:181] (0xc000562e70) Data frame received for 1\nI0904 14:17:30.699366 2723 log.go:181] (0xc000c2a500) (1) Data frame handling\nI0904 14:17:30.699380 2723 log.go:181] (0xc000c2a500) (1) Data frame sent\nI0904 14:17:30.699393 2723 log.go:181] (0xc000562e70) (0xc000c2a500) Stream removed, broadcasting: 1\nI0904 14:17:30.699412 2723 log.go:181] (0xc000562e70) Go away received\nI0904 14:17:30.699836 2723 log.go:181] (0xc000562e70) (0xc000c2a500) Stream removed, broadcasting: 1\nI0904 14:17:30.699848 2723 log.go:181] (0xc000562e70) (0xc000891360) Stream removed, broadcasting: 3\nI0904 14:17:30.699854 2723 log.go:181] (0xc000562e70) (0xc000c2a000) Stream removed, broadcasting: 5\n" Sep 4 14:17:30.710: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 4 14:17:30.710: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 4 14:17:40.729: INFO: Waiting for StatefulSet statefulset-630/ss2 to complete update Sep 4 14:17:40.729: INFO: Waiting for Pod statefulset-630/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Sep 4 14:17:40.729: INFO: Waiting for Pod statefulset-630/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Sep 4 14:17:50.750: INFO: Waiting for StatefulSet statefulset-630/ss2 to complete update Sep 4 14:17:50.750: INFO: Waiting for Pod statefulset-630/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Sep 4 14:18:00.736: INFO: Waiting for StatefulSet statefulset-630/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 4 14:18:10.738: INFO: Deleting all statefulset in ns statefulset-630 Sep 4 14:18:10.741: INFO: Scaling statefulset ss2 to 0 Sep 4 14:18:30.779: INFO: Waiting for statefulset status.replicas updated to 0 Sep 4 14:18:30.782: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:18:30.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-630" for this suite. 
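The rolling update above is driven by editing the pod template (httpd:2.4.38-alpine to 2.4.39-alpine) and the rollback by restoring the previous revision, while the mv of index.html on ss2-1 appears to wedge that pod's readiness so the update's progress can be observed. A sketch of the same update/rollback pair via kubectl's rollout machinery, against the set used above; the container name webserver is an assumption, not shown in the log:

kubectl -n statefulset-630 set image statefulset/ss2 webserver=docker.io/library/httpd:2.4.39-alpine
kubectl -n statefulset-630 rollout status statefulset/ss2
kubectl -n statefulset-630 rollout undo statefulset/ss2      # back to the previous revision
kubectl -n statefulset-630 rollout history statefulset/ss2   # list recorded revisions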
• [SLOW TEST:181.928 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":303,"completed":214,"skipped":3528,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:18:30.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Sep 4 14:18:30.905: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:18:38.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3294" for this suite. 
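With restartPolicy: Never, a failing init container is terminal: the pod goes straight to Failed and the app containers never start, which is what the test asserts. A minimal reproduction (names are hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-init-fail
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox:1.28
    command: ["sh", "-c", "exit 1"]   # fails once, and Never forbids a retry
  containers:
  - name: app
    image: busybox:1.28
    command: ["sh", "-c", "echo never runs"]
EOF
kubectl get pod demo-init-fail -o jsonpath='{.status.phase}'   # expect: Failed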
• [SLOW TEST:7.944 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":303,"completed":215,"skipped":3542,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:18:38.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 4 14:18:39.878: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 4 14:18:41.889: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825919, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825919, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825920, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734825919, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 4 14:18:44.946: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Sep 4 14:18:49.074: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config attach --namespace=webhook-2114 to-be-attached-pod -i -c=container1' Sep 4 14:18:49.205: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:18:49.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2114" for this suite. STEP: Destroying namespace "webhook-2114-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.701 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":303,"completed":216,"skipped":3547,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:18:49.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl label /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1333 STEP: creating the pod Sep 4 14:18:49.494: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9151' Sep 4 14:18:49.843: INFO: stderr: "" Sep 4 14:18:49.843: INFO: stdout: "pod/pause created\n" Sep 4 14:18:49.843: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Sep 4 14:18:49.843: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9151" to be "running and ready" Sep 4 14:18:50.274: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 430.908536ms Sep 4 14:18:52.279: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.435480149s Sep 4 14:18:54.282: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.438253352s Sep 4 14:18:54.282: INFO: Pod "pause" satisfied condition "running and ready" Sep 4 14:18:54.282: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod Sep 4 14:18:54.282: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9151' Sep 4 14:18:54.428: INFO: stderr: "" Sep 4 14:18:54.429: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Sep 4 14:18:54.429: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9151' Sep 4 14:18:54.537: INFO: stderr: "" Sep 4 14:18:54.537: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Sep 4 14:18:54.537: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9151' Sep 4 14:18:54.642: INFO: stderr: "" Sep 4 14:18:54.642: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Sep 4 14:18:54.642: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9151' Sep 4 14:18:54.768: INFO: stderr: "" Sep 4 14:18:54.768: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1340 STEP: using delete to clean up resources Sep 4 14:18:54.768: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9151' Sep 4 14:18:55.244: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 4 14:18:55.244: INFO: stdout: "pod \"pause\" force deleted\n" Sep 4 14:18:55.244: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9151' Sep 4 14:18:55.416: INFO: stderr: "No resources found in kubectl-9151 namespace.\n" Sep 4 14:18:55.416: INFO: stdout: "" Sep 4 14:18:55.416: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9151 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 4 14:18:55.518: INFO: stderr: "" Sep 4 14:18:55.518: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:18:55.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9151" for this suite. 
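For reference, the add/verify/remove cycle the test just ran maps onto plain kubectl usage; a minimal by-hand sketch (the demo namespace is illustrative, not the test's own):

  kubectl run pause --image=k8s.gcr.io/pause:3.2 -n demo
  kubectl label pods pause testing-label=testing-label-value -n demo
  kubectl get pod pause -L testing-label -n demo    # TESTING-LABEL column shows the value
  kubectl label pods pause testing-label- -n demo   # a trailing '-' removes the label
  kubectl get pod pause -L testing-label -n demo    # column is present but empty

The trailing-dash form is the same mechanism the test uses above to strip testing-label before verifying it is gone.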
• [SLOW TEST:6.064 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1330 should update the label on a resource [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":303,"completed":217,"skipped":3553,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:18:55.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 14:18:56.026: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Sep 4 14:18:59.224: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8657 create -f -' Sep 4 14:19:03.019: INFO: stderr: "" Sep 4 14:19:03.019: INFO: stdout: "e2e-test-crd-publish-openapi-4038-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Sep 4 14:19:03.019: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8657 delete e2e-test-crd-publish-openapi-4038-crds test-cr' Sep 4 14:19:03.149: INFO: stderr: "" Sep 4 14:19:03.149: INFO: stdout: "e2e-test-crd-publish-openapi-4038-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Sep 4 14:19:03.149: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8657 apply -f -' Sep 4 14:19:03.483: INFO: stderr: "" Sep 4 14:19:03.483: INFO: stdout: "e2e-test-crd-publish-openapi-4038-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Sep 4 14:19:03.483: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8657 delete e2e-test-crd-publish-openapi-4038-crds test-cr' Sep 4 14:19:03.611: INFO: stderr: "" Sep 4 14:19:03.611: INFO: stdout: "e2e-test-crd-publish-openapi-4038-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Sep 4 14:19:03.611: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4038-crds' Sep 4 14:19:03.977: INFO: stderr: "" Sep 4 14:19:03.977: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4038-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:19:06.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8657" for this suite. • [SLOW TEST:11.454 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":303,"completed":218,"skipped":3557,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:19:06.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Sep 4 14:19:15.227: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 4 14:19:15.269: INFO: Pod pod-with-poststart-http-hook still exists Sep 4 14:19:17.269: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 4 14:19:17.273: INFO: Pod pod-with-poststart-http-hook still exists Sep 4 14:19:19.269: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 4 14:19:19.273: INFO: Pod pod-with-poststart-http-hook still exists Sep 4 14:19:21.269: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 4 14:19:21.274: INFO: Pod pod-with-poststart-http-hook still exists Sep 4 14:19:23.269: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 4 14:19:23.272: INFO: Pod pod-with-poststart-http-hook still exists Sep 4 14:19:25.269: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 4 14:19:25.272: INFO: Pod pod-with-poststart-http-hook still exists Sep 4 14:19:27.269: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 4 14:19:27.274: INFO: Pod pod-with-poststart-http-hook still exists Sep 4 14:19:29.269: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 4 14:19:29.273: INFO: Pod pod-with-poststart-http-hook still exists Sep 4 14:19:31.269: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 4 14:19:31.273: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:19:31.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7707" for this suite. 
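The pod-with-poststart-http-hook pod above pairs a container with a lifecycle.postStart httpGet hook aimed at the handler pod created in BeforeEach. A minimal sketch of such a spec, assuming an already-running HTTP target (the host, port, and path here are illustrative placeholders, not the test's actual values):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-poststart-http-hook
  spec:
    containers:
    - name: main
      image: k8s.gcr.io/pause:3.2
      lifecycle:
        postStart:
          httpGet:
            host: 10.244.0.10      # illustrative; the test targets its handler pod's IP
            port: 8080
            path: /echo?msg=poststart
  EOF

The kubelet runs the hook immediately after the container starts, and a failed postStart hook kills the container, so observing the request at the handler is enough to show the hook executed.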
• [SLOW TEST:24.302 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":303,"completed":219,"skipped":3600,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:19:31.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 14:19:31.378: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-48bd2f7e-4028-4976-9f8d-7041d490a1e5" in namespace "security-context-test-6693" to be "Succeeded or Failed" Sep 4 14:19:31.413: INFO: Pod "alpine-nnp-false-48bd2f7e-4028-4976-9f8d-7041d490a1e5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.821626ms Sep 4 14:19:33.417: INFO: Pod "alpine-nnp-false-48bd2f7e-4028-4976-9f8d-7041d490a1e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039452211s Sep 4 14:19:35.437: INFO: Pod "alpine-nnp-false-48bd2f7e-4028-4976-9f8d-7041d490a1e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058590181s Sep 4 14:19:35.437: INFO: Pod "alpine-nnp-false-48bd2f7e-4028-4976-9f8d-7041d490a1e5" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:19:35.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6693" for this suite. 
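What the alpine-nnp-false pod above asserts is that allowPrivilegeEscalation: false sets the kernel's no_new_privs flag, so the process cannot gain privileges (for example via setuid binaries). A sketch of the relevant stanza (image and command are illustrative stand-ins for the test's own image):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: alpine-nnp-false
  spec:
    restartPolicy: Never
    containers:
    - name: alpine-nnp-false
      image: alpine:3.12
      command: ["grep", "NoNewPrivs", "/proc/self/status"]   # expect NoNewPrivs: 1
      securityContext:
        allowPrivilegeEscalation: false
  EOF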
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":220,"skipped":3608,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:19:35.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-15c6ea23-2967-46f5-b946-88c71b0fb56a in namespace container-probe-8893 Sep 4 14:19:41.798: INFO: Started pod test-webserver-15c6ea23-2967-46f5-b946-88c71b0fb56a in namespace container-probe-8893 STEP: checking the pod's current state and verifying that restartCount is present Sep 4 14:19:41.801: INFO: Initial restart count of pod test-webserver-15c6ea23-2967-46f5-b946-88c71b0fb56a is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:23:43.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8893" for this suite. 
• [SLOW TEST:248.025 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":221,"skipped":3613,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:23:43.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3784 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-3784 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3784 Sep 4 14:23:44.525: INFO: Found 0 stateful pods, waiting for 1 Sep 4 14:23:54.530: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Sep 4 14:23:54.533: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3784 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 4 14:23:54.801: INFO: stderr: "I0904 14:23:54.688657 2982 log.go:181] (0xc00084b6b0) (0xc0007dab40) Create stream\nI0904 14:23:54.688842 2982 log.go:181] (0xc00084b6b0) (0xc0007dab40) Stream added, broadcasting: 1\nI0904 14:23:54.691797 2982 log.go:181] (0xc00084b6b0) Reply frame received for 1\nI0904 14:23:54.691852 2982 log.go:181] (0xc00084b6b0) (0xc000d32780) Create stream\nI0904 14:23:54.691891 2982 log.go:181] (0xc00084b6b0) (0xc000d32780) Stream added, broadcasting: 3\nI0904 14:23:54.693369 2982 log.go:181] (0xc00084b6b0) Reply frame received for 3\nI0904 14:23:54.693413 2982 log.go:181] (0xc00084b6b0) (0xc00053a460) Create 
stream\nI0904 14:23:54.693445 2982 log.go:181] (0xc00084b6b0) (0xc00053a460) Stream added, broadcasting: 5\nI0904 14:23:54.694469 2982 log.go:181] (0xc00084b6b0) Reply frame received for 5\nI0904 14:23:54.761443 2982 log.go:181] (0xc00084b6b0) Data frame received for 5\nI0904 14:23:54.761467 2982 log.go:181] (0xc00053a460) (5) Data frame handling\nI0904 14:23:54.761480 2982 log.go:181] (0xc00053a460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0904 14:23:54.791079 2982 log.go:181] (0xc00084b6b0) Data frame received for 5\nI0904 14:23:54.791123 2982 log.go:181] (0xc00053a460) (5) Data frame handling\nI0904 14:23:54.791148 2982 log.go:181] (0xc00084b6b0) Data frame received for 3\nI0904 14:23:54.791158 2982 log.go:181] (0xc000d32780) (3) Data frame handling\nI0904 14:23:54.791170 2982 log.go:181] (0xc000d32780) (3) Data frame sent\nI0904 14:23:54.791189 2982 log.go:181] (0xc00084b6b0) Data frame received for 3\nI0904 14:23:54.791218 2982 log.go:181] (0xc000d32780) (3) Data frame handling\nI0904 14:23:54.793110 2982 log.go:181] (0xc00084b6b0) Data frame received for 1\nI0904 14:23:54.793132 2982 log.go:181] (0xc0007dab40) (1) Data frame handling\nI0904 14:23:54.793140 2982 log.go:181] (0xc0007dab40) (1) Data frame sent\nI0904 14:23:54.793152 2982 log.go:181] (0xc00084b6b0) (0xc0007dab40) Stream removed, broadcasting: 1\nI0904 14:23:54.793242 2982 log.go:181] (0xc00084b6b0) Go away received\nI0904 14:23:54.793405 2982 log.go:181] (0xc00084b6b0) (0xc0007dab40) Stream removed, broadcasting: 1\nI0904 14:23:54.793416 2982 log.go:181] (0xc00084b6b0) (0xc000d32780) Stream removed, broadcasting: 3\nI0904 14:23:54.793421 2982 log.go:181] (0xc00084b6b0) (0xc00053a460) Stream removed, broadcasting: 5\n" Sep 4 14:23:54.801: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 4 14:23:54.801: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 4 14:23:54.804: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Sep 4 14:24:04.809: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 4 14:24:04.809: INFO: Waiting for statefulset status.replicas updated to 0 Sep 4 14:24:04.827: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999387s Sep 4 14:24:05.831: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.990806674s Sep 4 14:24:06.836: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.986572322s Sep 4 14:24:07.840: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.981653857s Sep 4 14:24:08.845: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.977403987s Sep 4 14:24:09.850: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.972809514s Sep 4 14:24:10.872: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.967774001s Sep 4 14:24:11.875: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.946387235s Sep 4 14:24:12.880: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.942696171s Sep 4 14:24:13.885: INFO: Verifying statefulset ss doesn't scale past 1 for another 937.467395ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3784 Sep 4 14:24:14.890: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-3784 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 4 14:24:15.091: INFO: stderr: "I0904 14:24:15.024057 3001 log.go:181] (0xc0005b36b0) (0xc000c2e8c0) Create stream\nI0904 14:24:15.024110 3001 log.go:181] (0xc0005b36b0) (0xc000c2e8c0) Stream added, broadcasting: 1\nI0904 14:24:15.027985 3001 log.go:181] (0xc0005b36b0) Reply frame received for 1\nI0904 14:24:15.028019 3001 log.go:181] (0xc0005b36b0) (0xc000c2e000) Create stream\nI0904 14:24:15.028029 3001 log.go:181] (0xc0005b36b0) (0xc000c2e000) Stream added, broadcasting: 3\nI0904 14:24:15.028991 3001 log.go:181] (0xc0005b36b0) Reply frame received for 3\nI0904 14:24:15.029026 3001 log.go:181] (0xc0005b36b0) (0xc000889ea0) Create stream\nI0904 14:24:15.029041 3001 log.go:181] (0xc0005b36b0) (0xc000889ea0) Stream added, broadcasting: 5\nI0904 14:24:15.029973 3001 log.go:181] (0xc0005b36b0) Reply frame received for 5\nI0904 14:24:15.080688 3001 log.go:181] (0xc0005b36b0) Data frame received for 3\nI0904 14:24:15.080718 3001 log.go:181] (0xc000c2e000) (3) Data frame handling\nI0904 14:24:15.080875 3001 log.go:181] (0xc0005b36b0) Data frame received for 5\nI0904 14:24:15.080910 3001 log.go:181] (0xc000889ea0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0904 14:24:15.080953 3001 log.go:181] (0xc000c2e000) (3) Data frame sent\nI0904 14:24:15.080991 3001 log.go:181] (0xc0005b36b0) Data frame received for 3\nI0904 14:24:15.081002 3001 log.go:181] (0xc000c2e000) (3) Data frame handling\nI0904 14:24:15.081039 3001 log.go:181] (0xc000889ea0) (5) Data frame sent\nI0904 14:24:15.081082 3001 log.go:181] (0xc0005b36b0) Data frame received for 5\nI0904 14:24:15.081110 3001 log.go:181] (0xc000889ea0) (5) Data frame handling\nI0904 14:24:15.082073 3001 log.go:181] (0xc0005b36b0) Data frame received for 1\nI0904 14:24:15.082125 3001 log.go:181] (0xc000c2e8c0) (1) Data frame handling\nI0904 14:24:15.082162 3001 log.go:181] (0xc000c2e8c0) (1) Data frame sent\nI0904 14:24:15.082192 3001 log.go:181] (0xc0005b36b0) (0xc000c2e8c0) Stream removed, broadcasting: 1\nI0904 14:24:15.082229 3001 log.go:181] (0xc0005b36b0) Go away received\nI0904 14:24:15.082565 3001 log.go:181] (0xc0005b36b0) (0xc000c2e8c0) Stream removed, broadcasting: 1\nI0904 14:24:15.082587 3001 log.go:181] (0xc0005b36b0) (0xc000c2e000) Stream removed, broadcasting: 3\nI0904 14:24:15.082599 3001 log.go:181] (0xc0005b36b0) (0xc000889ea0) Stream removed, broadcasting: 5\n" Sep 4 14:24:15.091: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 4 14:24:15.091: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 4 14:24:15.095: INFO: Found 1 stateful pods, waiting for 3 Sep 4 14:24:25.101: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Sep 4 14:24:25.102: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Sep 4 14:24:25.102: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Sep 4 14:24:25.109: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3784 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 4 14:24:25.330: INFO: stderr: "I0904 
14:24:25.239778 3019 log.go:181] (0xc00003b4a0) (0xc000bee6e0) Create stream\nI0904 14:24:25.239844 3019 log.go:181] (0xc00003b4a0) (0xc000bee6e0) Stream added, broadcasting: 1\nI0904 14:24:25.242520 3019 log.go:181] (0xc00003b4a0) Reply frame received for 1\nI0904 14:24:25.242566 3019 log.go:181] (0xc00003b4a0) (0xc0005d4000) Create stream\nI0904 14:24:25.242582 3019 log.go:181] (0xc00003b4a0) (0xc0005d4000) Stream added, broadcasting: 3\nI0904 14:24:25.243535 3019 log.go:181] (0xc00003b4a0) Reply frame received for 3\nI0904 14:24:25.243569 3019 log.go:181] (0xc00003b4a0) (0xc00063c000) Create stream\nI0904 14:24:25.243580 3019 log.go:181] (0xc00003b4a0) (0xc00063c000) Stream added, broadcasting: 5\nI0904 14:24:25.244363 3019 log.go:181] (0xc00003b4a0) Reply frame received for 5\nI0904 14:24:25.321974 3019 log.go:181] (0xc00003b4a0) Data frame received for 5\nI0904 14:24:25.322013 3019 log.go:181] (0xc00063c000) (5) Data frame handling\nI0904 14:24:25.322027 3019 log.go:181] (0xc00063c000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0904 14:24:25.322046 3019 log.go:181] (0xc00003b4a0) Data frame received for 3\nI0904 14:24:25.322053 3019 log.go:181] (0xc0005d4000) (3) Data frame handling\nI0904 14:24:25.322067 3019 log.go:181] (0xc0005d4000) (3) Data frame sent\nI0904 14:24:25.322075 3019 log.go:181] (0xc00003b4a0) Data frame received for 3\nI0904 14:24:25.322081 3019 log.go:181] (0xc0005d4000) (3) Data frame handling\nI0904 14:24:25.322126 3019 log.go:181] (0xc00003b4a0) Data frame received for 5\nI0904 14:24:25.322159 3019 log.go:181] (0xc00063c000) (5) Data frame handling\nI0904 14:24:25.323307 3019 log.go:181] (0xc00003b4a0) Data frame received for 1\nI0904 14:24:25.323331 3019 log.go:181] (0xc000bee6e0) (1) Data frame handling\nI0904 14:24:25.323340 3019 log.go:181] (0xc000bee6e0) (1) Data frame sent\nI0904 14:24:25.323349 3019 log.go:181] (0xc00003b4a0) (0xc000bee6e0) Stream removed, broadcasting: 1\nI0904 14:24:25.323366 3019 log.go:181] (0xc00003b4a0) Go away received\nI0904 14:24:25.323727 3019 log.go:181] (0xc00003b4a0) (0xc000bee6e0) Stream removed, broadcasting: 1\nI0904 14:24:25.323742 3019 log.go:181] (0xc00003b4a0) (0xc0005d4000) Stream removed, broadcasting: 3\nI0904 14:24:25.323748 3019 log.go:181] (0xc00003b4a0) (0xc00063c000) Stream removed, broadcasting: 5\n" Sep 4 14:24:25.330: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 4 14:24:25.330: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 4 14:24:25.330: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3784 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 4 14:24:25.616: INFO: stderr: "I0904 14:24:25.487327 3035 log.go:181] (0xc000d18000) (0xc0009a1ea0) Create stream\nI0904 14:24:25.487388 3035 log.go:181] (0xc000d18000) (0xc0009a1ea0) Stream added, broadcasting: 1\nI0904 14:24:25.489580 3035 log.go:181] (0xc000d18000) Reply frame received for 1\nI0904 14:24:25.489620 3035 log.go:181] (0xc000d18000) (0xc0008b2640) Create stream\nI0904 14:24:25.489629 3035 log.go:181] (0xc000d18000) (0xc0008b2640) Stream added, broadcasting: 3\nI0904 14:24:25.491089 3035 log.go:181] (0xc000d18000) Reply frame received for 3\nI0904 14:24:25.491132 3035 log.go:181] (0xc000d18000) (0xc0008b3360) Create stream\nI0904 14:24:25.491150 3035 log.go:181] 
(0xc000d18000) (0xc0008b3360) Stream added, broadcasting: 5\nI0904 14:24:25.493173 3035 log.go:181] (0xc000d18000) Reply frame received for 5\nI0904 14:24:25.558322 3035 log.go:181] (0xc000d18000) Data frame received for 5\nI0904 14:24:25.558339 3035 log.go:181] (0xc0008b3360) (5) Data frame handling\nI0904 14:24:25.558352 3035 log.go:181] (0xc0008b3360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0904 14:24:25.602143 3035 log.go:181] (0xc000d18000) Data frame received for 5\nI0904 14:24:25.602304 3035 log.go:181] (0xc0008b3360) (5) Data frame handling\nI0904 14:24:25.602351 3035 log.go:181] (0xc000d18000) Data frame received for 3\nI0904 14:24:25.602379 3035 log.go:181] (0xc0008b2640) (3) Data frame handling\nI0904 14:24:25.602413 3035 log.go:181] (0xc0008b2640) (3) Data frame sent\nI0904 14:24:25.602438 3035 log.go:181] (0xc000d18000) Data frame received for 3\nI0904 14:24:25.602457 3035 log.go:181] (0xc0008b2640) (3) Data frame handling\nI0904 14:24:25.604851 3035 log.go:181] (0xc000d18000) Data frame received for 1\nI0904 14:24:25.604957 3035 log.go:181] (0xc0009a1ea0) (1) Data frame handling\nI0904 14:24:25.605039 3035 log.go:181] (0xc0009a1ea0) (1) Data frame sent\nI0904 14:24:25.605103 3035 log.go:181] (0xc000d18000) (0xc0009a1ea0) Stream removed, broadcasting: 1\nI0904 14:24:25.605184 3035 log.go:181] (0xc000d18000) Go away received\nI0904 14:24:25.605570 3035 log.go:181] (0xc000d18000) (0xc0009a1ea0) Stream removed, broadcasting: 1\nI0904 14:24:25.605635 3035 log.go:181] (0xc000d18000) (0xc0008b2640) Stream removed, broadcasting: 3\nI0904 14:24:25.605675 3035 log.go:181] (0xc000d18000) (0xc0008b3360) Stream removed, broadcasting: 5\n" Sep 4 14:24:25.616: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 4 14:24:25.616: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 4 14:24:25.616: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3784 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 4 14:24:25.869: INFO: stderr: "I0904 14:24:25.741545 3053 log.go:181] (0xc000d1d3f0) (0xc000762960) Create stream\nI0904 14:24:25.741597 3053 log.go:181] (0xc000d1d3f0) (0xc000762960) Stream added, broadcasting: 1\nI0904 14:24:25.746887 3053 log.go:181] (0xc000d1d3f0) Reply frame received for 1\nI0904 14:24:25.746945 3053 log.go:181] (0xc000d1d3f0) (0xc000762000) Create stream\nI0904 14:24:25.746966 3053 log.go:181] (0xc000d1d3f0) (0xc000762000) Stream added, broadcasting: 3\nI0904 14:24:25.747803 3053 log.go:181] (0xc000d1d3f0) Reply frame received for 3\nI0904 14:24:25.747830 3053 log.go:181] (0xc000d1d3f0) (0xc0007620a0) Create stream\nI0904 14:24:25.747838 3053 log.go:181] (0xc000d1d3f0) (0xc0007620a0) Stream added, broadcasting: 5\nI0904 14:24:25.748929 3053 log.go:181] (0xc000d1d3f0) Reply frame received for 5\nI0904 14:24:25.823415 3053 log.go:181] (0xc000d1d3f0) Data frame received for 5\nI0904 14:24:25.823457 3053 log.go:181] (0xc0007620a0) (5) Data frame handling\nI0904 14:24:25.823483 3053 log.go:181] (0xc0007620a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0904 14:24:25.859380 3053 log.go:181] (0xc000d1d3f0) Data frame received for 5\nI0904 14:24:25.859430 3053 log.go:181] (0xc000d1d3f0) Data frame received for 3\nI0904 14:24:25.859456 3053 log.go:181] (0xc000762000) (3) Data frame 
handling\nI0904 14:24:25.859478 3053 log.go:181] (0xc000762000) (3) Data frame sent\nI0904 14:24:25.859485 3053 log.go:181] (0xc000d1d3f0) Data frame received for 3\nI0904 14:24:25.859490 3053 log.go:181] (0xc000762000) (3) Data frame handling\nI0904 14:24:25.859522 3053 log.go:181] (0xc0007620a0) (5) Data frame handling\nI0904 14:24:25.861976 3053 log.go:181] (0xc000d1d3f0) Data frame received for 1\nI0904 14:24:25.862008 3053 log.go:181] (0xc000762960) (1) Data frame handling\nI0904 14:24:25.862022 3053 log.go:181] (0xc000762960) (1) Data frame sent\nI0904 14:24:25.862036 3053 log.go:181] (0xc000d1d3f0) (0xc000762960) Stream removed, broadcasting: 1\nI0904 14:24:25.862054 3053 log.go:181] (0xc000d1d3f0) Go away received\nI0904 14:24:25.862470 3053 log.go:181] (0xc000d1d3f0) (0xc000762960) Stream removed, broadcasting: 1\nI0904 14:24:25.862493 3053 log.go:181] (0xc000d1d3f0) (0xc000762000) Stream removed, broadcasting: 3\nI0904 14:24:25.862504 3053 log.go:181] (0xc000d1d3f0) (0xc0007620a0) Stream removed, broadcasting: 5\n" Sep 4 14:24:25.869: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 4 14:24:25.869: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 4 14:24:25.869: INFO: Waiting for statefulset status.replicas updated to 0 Sep 4 14:24:25.872: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Sep 4 14:24:35.881: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 4 14:24:35.881: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Sep 4 14:24:35.881: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Sep 4 14:24:35.914: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999587s Sep 4 14:24:36.919: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.975786626s Sep 4 14:24:37.925: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.970829527s Sep 4 14:24:39.029: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.964836258s Sep 4 14:24:40.035: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.861117086s Sep 4 14:24:41.040: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.855307226s Sep 4 14:24:42.045: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.849848959s Sep 4 14:24:43.050: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.845121107s Sep 4 14:24:44.055: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.840254406s Sep 4 14:24:45.074: INFO: Verifying statefulset ss doesn't scale past 3 for another 834.790222ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-3784 Sep 4 14:24:46.094: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3784 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 4 14:24:46.351: INFO: stderr: "I0904 14:24:46.257765 3072 log.go:181] (0xc0005bb550) (0xc0005b2aa0) Create stream\nI0904 14:24:46.257835 3072 log.go:181] (0xc0005bb550) (0xc0005b2aa0) Stream added, broadcasting: 1\nI0904 14:24:46.262746 3072 log.go:181] (0xc0005bb550) Reply frame received for 1\nI0904 14:24:46.262778 3072 log.go:181] (0xc0005bb550) (0xc0003086e0) Create stream\nI0904 14:24:46.262788 3072 
log.go:181] (0xc0005bb550) (0xc0003086e0) Stream added, broadcasting: 3\nI0904 14:24:46.263678 3072 log.go:181] (0xc0005bb550) Reply frame received for 3\nI0904 14:24:46.263721 3072 log.go:181] (0xc0005bb550) (0xc0000caaa0) Create stream\nI0904 14:24:46.263732 3072 log.go:181] (0xc0005bb550) (0xc0000caaa0) Stream added, broadcasting: 5\nI0904 14:24:46.264506 3072 log.go:181] (0xc0005bb550) Reply frame received for 5\nI0904 14:24:46.343214 3072 log.go:181] (0xc0005bb550) Data frame received for 3\nI0904 14:24:46.343242 3072 log.go:181] (0xc0003086e0) (3) Data frame handling\nI0904 14:24:46.343254 3072 log.go:181] (0xc0003086e0) (3) Data frame sent\nI0904 14:24:46.343262 3072 log.go:181] (0xc0005bb550) Data frame received for 3\nI0904 14:24:46.343268 3072 log.go:181] (0xc0003086e0) (3) Data frame handling\nI0904 14:24:46.343298 3072 log.go:181] (0xc0005bb550) Data frame received for 5\nI0904 14:24:46.343305 3072 log.go:181] (0xc0000caaa0) (5) Data frame handling\nI0904 14:24:46.343317 3072 log.go:181] (0xc0000caaa0) (5) Data frame sent\nI0904 14:24:46.343324 3072 log.go:181] (0xc0005bb550) Data frame received for 5\nI0904 14:24:46.343329 3072 log.go:181] (0xc0000caaa0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0904 14:24:46.344534 3072 log.go:181] (0xc0005bb550) Data frame received for 1\nI0904 14:24:46.344559 3072 log.go:181] (0xc0005b2aa0) (1) Data frame handling\nI0904 14:24:46.344569 3072 log.go:181] (0xc0005b2aa0) (1) Data frame sent\nI0904 14:24:46.344580 3072 log.go:181] (0xc0005bb550) (0xc0005b2aa0) Stream removed, broadcasting: 1\nI0904 14:24:46.344635 3072 log.go:181] (0xc0005bb550) Go away received\nI0904 14:24:46.344916 3072 log.go:181] (0xc0005bb550) (0xc0005b2aa0) Stream removed, broadcasting: 1\nI0904 14:24:46.344930 3072 log.go:181] (0xc0005bb550) (0xc0003086e0) Stream removed, broadcasting: 3\nI0904 14:24:46.344936 3072 log.go:181] (0xc0005bb550) (0xc0000caaa0) Stream removed, broadcasting: 5\n" Sep 4 14:24:46.351: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 4 14:24:46.351: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 4 14:24:46.351: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3784 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 4 14:24:46.580: INFO: stderr: "I0904 14:24:46.495588 3089 log.go:181] (0xc000e33080) (0xc000ba6960) Create stream\nI0904 14:24:46.495639 3089 log.go:181] (0xc000e33080) (0xc000ba6960) Stream added, broadcasting: 1\nI0904 14:24:46.508823 3089 log.go:181] (0xc000e33080) Reply frame received for 1\nI0904 14:24:46.508888 3089 log.go:181] (0xc000e33080) (0xc000ba6000) Create stream\nI0904 14:24:46.508902 3089 log.go:181] (0xc000e33080) (0xc000ba6000) Stream added, broadcasting: 3\nI0904 14:24:46.510354 3089 log.go:181] (0xc000e33080) Reply frame received for 3\nI0904 14:24:46.510398 3089 log.go:181] (0xc000e33080) (0xc000ba60a0) Create stream\nI0904 14:24:46.510406 3089 log.go:181] (0xc000e33080) (0xc000ba60a0) Stream added, broadcasting: 5\nI0904 14:24:46.512714 3089 log.go:181] (0xc000e33080) Reply frame received for 5\nI0904 14:24:46.568916 3089 log.go:181] (0xc000e33080) Data frame received for 5\nI0904 14:24:46.568947 3089 log.go:181] (0xc000ba60a0) (5) Data frame handling\nI0904 14:24:46.568955 3089 log.go:181] (0xc000ba60a0) (5) Data frame sent\nI0904 
14:24:46.568961 3089 log.go:181] (0xc000e33080) Data frame received for 5\nI0904 14:24:46.568966 3089 log.go:181] (0xc000ba60a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0904 14:24:46.568989 3089 log.go:181] (0xc000e33080) Data frame received for 3\nI0904 14:24:46.568996 3089 log.go:181] (0xc000ba6000) (3) Data frame handling\nI0904 14:24:46.569009 3089 log.go:181] (0xc000ba6000) (3) Data frame sent\nI0904 14:24:46.569019 3089 log.go:181] (0xc000e33080) Data frame received for 3\nI0904 14:24:46.569024 3089 log.go:181] (0xc000ba6000) (3) Data frame handling\nI0904 14:24:46.570329 3089 log.go:181] (0xc000e33080) Data frame received for 1\nI0904 14:24:46.570357 3089 log.go:181] (0xc000ba6960) (1) Data frame handling\nI0904 14:24:46.570374 3089 log.go:181] (0xc000ba6960) (1) Data frame sent\nI0904 14:24:46.570391 3089 log.go:181] (0xc000e33080) (0xc000ba6960) Stream removed, broadcasting: 1\nI0904 14:24:46.570418 3089 log.go:181] (0xc000e33080) Go away received\nI0904 14:24:46.570715 3089 log.go:181] (0xc000e33080) (0xc000ba6960) Stream removed, broadcasting: 1\nI0904 14:24:46.570728 3089 log.go:181] (0xc000e33080) (0xc000ba6000) Stream removed, broadcasting: 3\nI0904 14:24:46.570737 3089 log.go:181] (0xc000e33080) (0xc000ba60a0) Stream removed, broadcasting: 5\n" Sep 4 14:24:46.580: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 4 14:24:46.580: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 4 14:24:46.580: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3784 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 4 14:24:46.805: INFO: stderr: "I0904 14:24:46.709526 3107 log.go:181] (0xc000aef080) (0xc0003cdcc0) Create stream\nI0904 14:24:46.709576 3107 log.go:181] (0xc000aef080) (0xc0003cdcc0) Stream added, broadcasting: 1\nI0904 14:24:46.714279 3107 log.go:181] (0xc000aef080) Reply frame received for 1\nI0904 14:24:46.714326 3107 log.go:181] (0xc000aef080) (0xc000d12000) Create stream\nI0904 14:24:46.714340 3107 log.go:181] (0xc000aef080) (0xc000d12000) Stream added, broadcasting: 3\nI0904 14:24:46.715202 3107 log.go:181] (0xc000aef080) Reply frame received for 3\nI0904 14:24:46.715242 3107 log.go:181] (0xc000aef080) (0xc000309ea0) Create stream\nI0904 14:24:46.715257 3107 log.go:181] (0xc000aef080) (0xc000309ea0) Stream added, broadcasting: 5\nI0904 14:24:46.716212 3107 log.go:181] (0xc000aef080) Reply frame received for 5\nI0904 14:24:46.800862 3107 log.go:181] (0xc000aef080) Data frame received for 3\nI0904 14:24:46.800889 3107 log.go:181] (0xc000d12000) (3) Data frame handling\nI0904 14:24:46.800899 3107 log.go:181] (0xc000d12000) (3) Data frame sent\nI0904 14:24:46.800907 3107 log.go:181] (0xc000aef080) Data frame received for 3\nI0904 14:24:46.800912 3107 log.go:181] (0xc000d12000) (3) Data frame handling\nI0904 14:24:46.800995 3107 log.go:181] (0xc000aef080) Data frame received for 5\nI0904 14:24:46.801021 3107 log.go:181] (0xc000309ea0) (5) Data frame handling\nI0904 14:24:46.801043 3107 log.go:181] (0xc000309ea0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0904 14:24:46.801055 3107 log.go:181] (0xc000aef080) Data frame received for 5\nI0904 14:24:46.801088 3107 log.go:181] (0xc000309ea0) (5) Data frame handling\nI0904 14:24:46.801953 3107 log.go:181] (0xc000aef080) Data 
frame received for 1\nI0904 14:24:46.801965 3107 log.go:181] (0xc0003cdcc0) (1) Data frame handling\nI0904 14:24:46.801979 3107 log.go:181] (0xc0003cdcc0) (1) Data frame sent\nI0904 14:24:46.801989 3107 log.go:181] (0xc000aef080) (0xc0003cdcc0) Stream removed, broadcasting: 1\nI0904 14:24:46.802055 3107 log.go:181] (0xc000aef080) Go away received\nI0904 14:24:46.802217 3107 log.go:181] (0xc000aef080) (0xc0003cdcc0) Stream removed, broadcasting: 1\nI0904 14:24:46.802228 3107 log.go:181] (0xc000aef080) (0xc000d12000) Stream removed, broadcasting: 3\nI0904 14:24:46.802234 3107 log.go:181] (0xc000aef080) (0xc000309ea0) Stream removed, broadcasting: 5\n" Sep 4 14:24:46.805: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 4 14:24:46.805: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 4 14:24:46.805: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 4 14:25:26.822: INFO: Deleting all statefulset in ns statefulset-3784 Sep 4 14:25:26.825: INFO: Scaling statefulset ss to 0 Sep 4 14:25:26.835: INFO: Waiting for statefulset status.replicas updated to 0 Sep 4 14:25:26.838: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:25:26.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3784" for this suite. 
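To summarize the long exchange above: the test makes pods unready by moving httpd's index.html out of the web root (failing the readiness check), confirms that neither scale-up past 1 nor scale-down past 3 proceeds while a pod is unready, then restores the file and watches the set drain in reverse ordinal order. The moves it runs boil down to (namespace and set name as in the log):

  kubectl scale statefulset ss --replicas=3 -n statefulset-3784
  kubectl exec ss-0 -n statefulset-3784 -- sh -c 'mv /usr/local/apache2/htdocs/index.html /tmp/'   # ss-0 goes unready
  kubectl scale statefulset ss --replicas=0 -n statefulset-3784   # deletion halts while any pod is unready
  kubectl exec ss-0 -n statefulset-3784 -- sh -c 'mv /tmp/index.html /usr/local/apache2/htdocs/'   # ss-0 recovers

The ordering guarantee itself comes from the default podManagementPolicy: OrderedReady, which creates ordinals 0..N-1 one at a time and removes them highest-first.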
• [SLOW TEST:103.423 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":303,"completed":222,"skipped":3616,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:25:26.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-294c1d67-4c57-4683-b961-a957eb8094c8 STEP: Creating a pod to test consume configMaps Sep 4 14:25:27.024: INFO: Waiting up to 5m0s for pod "pod-configmaps-977894b5-8862-4f05-9eae-56d2f8c0fdb7" in namespace "configmap-5730" to be "Succeeded or Failed" Sep 4 14:25:27.031: INFO: Pod "pod-configmaps-977894b5-8862-4f05-9eae-56d2f8c0fdb7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.009946ms Sep 4 14:25:29.035: INFO: Pod "pod-configmaps-977894b5-8862-4f05-9eae-56d2f8c0fdb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0113093s Sep 4 14:25:31.039: INFO: Pod "pod-configmaps-977894b5-8862-4f05-9eae-56d2f8c0fdb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015229611s STEP: Saw pod success Sep 4 14:25:31.039: INFO: Pod "pod-configmaps-977894b5-8862-4f05-9eae-56d2f8c0fdb7" satisfied condition "Succeeded or Failed" Sep 4 14:25:31.042: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-977894b5-8862-4f05-9eae-56d2f8c0fdb7 container configmap-volume-test: STEP: delete the pod Sep 4 14:25:31.231: INFO: Waiting for pod pod-configmaps-977894b5-8862-4f05-9eae-56d2f8c0fdb7 to disappear Sep 4 14:25:31.383: INFO: Pod pod-configmaps-977894b5-8862-4f05-9eae-56d2f8c0fdb7 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:25:31.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5730" for this suite. 
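The "with mappings" variant differs from the plain ConfigMap-volume test in that it selects keys explicitly: the volume's items list remaps a key to a chosen relative path instead of projecting every key under its own name. A sketch under assumed names (the test generates its own):

  kubectl create configmap configmap-test-volume-map -n demo --from-literal=data-2=value-2
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps
    namespace: demo
  spec:
    restartPolicy: Never
    containers:
    - name: configmap-volume-test
      image: busybox:1.32
      command: ["cat", "/etc/configmap-volume/path/to/data-2"]
      volumeMounts:
      - name: configmap-volume
        mountPath: /etc/configmap-volume
    volumes:
    - name: configmap-volume
      configMap:
        name: configmap-test-volume-map
        items:
        - key: data-2
          path: path/to/data-2
  EOF

The pod succeeds only if the remapped file exists and holds the key's value, which is the "Succeeded or Failed" condition being polled above.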
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":223,"skipped":3634,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:25:31.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Sep 4 14:25:35.708: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:25:35.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6355" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":224,"skipped":3644,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:25:35.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-lsxz STEP: Creating a pod to test atomic-volume-subpath Sep 4 14:25:35.963: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-lsxz" in namespace "subpath-7650" to be "Succeeded or Failed" Sep 4 14:25:35.966: INFO: Pod "pod-subpath-test-downwardapi-lsxz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.931743ms Sep 4 14:25:37.969: INFO: Pod "pod-subpath-test-downwardapi-lsxz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006461475s Sep 4 14:25:39.974: INFO: Pod "pod-subpath-test-downwardapi-lsxz": Phase="Running", Reason="", readiness=true. Elapsed: 4.010959435s Sep 4 14:25:41.978: INFO: Pod "pod-subpath-test-downwardapi-lsxz": Phase="Running", Reason="", readiness=true. Elapsed: 6.015329572s Sep 4 14:25:43.983: INFO: Pod "pod-subpath-test-downwardapi-lsxz": Phase="Running", Reason="", readiness=true. Elapsed: 8.020073429s Sep 4 14:25:45.987: INFO: Pod "pod-subpath-test-downwardapi-lsxz": Phase="Running", Reason="", readiness=true. Elapsed: 10.024214665s Sep 4 14:25:47.991: INFO: Pod "pod-subpath-test-downwardapi-lsxz": Phase="Running", Reason="", readiness=true. Elapsed: 12.028448187s Sep 4 14:25:49.995: INFO: Pod "pod-subpath-test-downwardapi-lsxz": Phase="Running", Reason="", readiness=true. Elapsed: 14.032418382s Sep 4 14:25:52.001: INFO: Pod "pod-subpath-test-downwardapi-lsxz": Phase="Running", Reason="", readiness=true. Elapsed: 16.038524669s Sep 4 14:25:54.005: INFO: Pod "pod-subpath-test-downwardapi-lsxz": Phase="Running", Reason="", readiness=true. Elapsed: 18.04235213s Sep 4 14:25:56.009: INFO: Pod "pod-subpath-test-downwardapi-lsxz": Phase="Running", Reason="", readiness=true. Elapsed: 20.046036822s Sep 4 14:25:58.012: INFO: Pod "pod-subpath-test-downwardapi-lsxz": Phase="Running", Reason="", readiness=true. Elapsed: 22.049207267s Sep 4 14:26:00.016: INFO: Pod "pod-subpath-test-downwardapi-lsxz": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.052919494s STEP: Saw pod success Sep 4 14:26:00.016: INFO: Pod "pod-subpath-test-downwardapi-lsxz" satisfied condition "Succeeded or Failed" Sep 4 14:26:00.018: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-lsxz container test-container-subpath-downwardapi-lsxz: STEP: delete the pod Sep 4 14:26:00.050: INFO: Waiting for pod pod-subpath-test-downwardapi-lsxz to disappear Sep 4 14:26:00.065: INFO: Pod pod-subpath-test-downwardapi-lsxz no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-lsxz Sep 4 14:26:00.065: INFO: Deleting pod "pod-subpath-test-downwardapi-lsxz" in namespace "subpath-7650" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:26:00.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7650" for this suite. • [SLOW TEST:24.230 seconds] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":303,"completed":225,"skipped":3703,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:26:00.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 14:26:00.432: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Sep 4 14:26:01.750: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:26:03.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1324" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":303,"completed":226,"skipped":3727,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:26:03.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:27:04.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7083" for this suite. 
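A note for readers following the suite: the spec above leans on the rule that a failing readiness probe only keeps the pod out of service rotation; unlike a liveness probe, it never restarts the container, so Ready stays false and restartCount stays 0 for the whole minute the test observes. A minimal sketch of such a pod, assuming the v0.19 client-go/api (where the probe handler struct is still named Handler) and the kubeconfig path the suite prints; the "probe-demo" name and "default" namespace are placeholders, not the suite's actual objects. Later sketches in these notes reuse the same clientset.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "probe-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.29",
				Command: []string{"sleep", "3600"},
				// Always fails: the pod should report Ready=false forever
				// while restartCount stays at 0.
				ReadinessProbe: &corev1.Probe{
					Handler:       corev1.Handler{Exec: &corev1.ExecAction{Command: []string{"/bin/false"}}},
					PeriodSeconds: 5,
				},
			}},
		},
	}
	created, err := clientset.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.Name)
}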
• [SLOW TEST:60.853 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":303,"completed":227,"skipped":3737,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:27:04.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:27:11.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9751" for this suite. STEP: Destroying namespace "nsdeletetest-9524" for this suite. Sep 4 14:27:11.650: INFO: Namespace nsdeletetest-9524 was already deleted STEP: Destroying namespace "nsdeletetest-1854" for this suite. 
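What the namespaces spec checks is ordinary cascading deletion: removing a namespace removes the Services inside it, and a recreated namespace with the same name starts empty. Deletion is asynchronous (the namespace passes through Terminating), which is why the test explicitly waits for removal. A sketch of the delete-and-wait step, reusing the clientset from the first sketch above; "nsdeletetest" is a placeholder name.

// Extra imports beyond the first sketch:
//   "time"
//   apierrors "k8s.io/apimachinery/pkg/api/errors"
//   "k8s.io/apimachinery/pkg/util/wait"
func waitForNamespaceGone(clientset *kubernetes.Clientset) error {
	if err := clientset.CoreV1().Namespaces().Delete(context.TODO(), "nsdeletetest", metav1.DeleteOptions{}); err != nil {
		return err
	}
	// Poll until the GET comes back NotFound; until then the namespace
	// still exists in Terminating state.
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		_, err := clientset.CoreV1().Namespaces().Get(context.TODO(), "nsdeletetest", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil
		}
		return false, err
	})
}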
• [SLOW TEST:7.465 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":303,"completed":228,"skipped":3743,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:27:11.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0904 14:27:52.533793 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Sep 4 14:28:54.555: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Sep 4 14:28:54.555: INFO: Deleting pod "simpletest.rc-46v7s" in namespace "gc-3836" Sep 4 14:28:54.589: INFO: Deleting pod "simpletest.rc-6wfvr" in namespace "gc-3836" Sep 4 14:28:54.690: INFO: Deleting pod "simpletest.rc-dl7lh" in namespace "gc-3836" Sep 4 14:28:54.742: INFO: Deleting pod "simpletest.rc-g9vv8" in namespace "gc-3836" Sep 4 14:28:55.353: INFO: Deleting pod "simpletest.rc-j2hfm" in namespace "gc-3836" Sep 4 14:28:55.818: INFO: Deleting pod "simpletest.rc-kf9w7" in namespace "gc-3836" Sep 4 14:28:56.026: INFO: Deleting pod "simpletest.rc-m5jzr" in namespace "gc-3836" Sep 4 14:28:56.483: INFO: Deleting pod "simpletest.rc-nttl2" in namespace "gc-3836" Sep 4 14:28:57.040: INFO: Deleting pod "simpletest.rc-p2gjv" in namespace "gc-3836" Sep 4 14:28:57.511: INFO: Deleting pod "simpletest.rc-x5kpg" in namespace "gc-3836" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:28:57.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3836" for this suite. 
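The orphaning behaviour exercised above is driven entirely by DeleteOptions: with PropagationPolicy set to Orphan, the garbage collector strips the owner references instead of deleting the dependents. That is why the suite waits 30 seconds to confirm the pods survive, and then has to delete each simpletest.rc-* pod itself. A sketch of that delete call, reusing the earlier clientset; the RC name and namespace are placeholders.

orphan := metav1.DeletePropagationOrphan
// Delete the RC but leave its pods running, now ownerless.
err := clientset.CoreV1().ReplicationControllers("default").Delete(
	context.TODO(), "simpletest-rc",
	metav1.DeleteOptions{PropagationPolicy: &orphan},
)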
• [SLOW TEST:106.113 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":303,"completed":229,"skipped":3747,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:28:57.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Sep 4 14:28:59.334: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3392 /api/v1/namespaces/watch-3392/configmaps/e2e-watch-test-resource-version a7bc5cc8-aa5d-4763-8726-44aafa2690ed 6825907 0 2020-09-04 14:28:58 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-09-04 14:28:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 4 14:28:59.334: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3392 /api/v1/namespaces/watch-3392/configmaps/e2e-watch-test-resource-version a7bc5cc8-aa5d-4763-8726-44aafa2690ed 6825910 0 2020-09-04 14:28:58 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-09-04 14:28:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:28:59.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3392" for this suite. 
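The watch spec demonstrates resuming from a known ResourceVersion: a watch opened with the version returned by the first update replays only the later events, which is exactly the MODIFIED/DELETED pair logged above (mutation 2, then the delete). A sketch of that pattern, reusing the earlier clientset; the configmap name matches the log, the rv argument is whatever ResourceVersion the first Update returned.

func watchFrom(clientset *kubernetes.Clientset, rv string) error {
	w, err := clientset.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		FieldSelector:   "metadata.name=e2e-watch-test-resource-version",
		ResourceVersion: rv, // start strictly after this version
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println("Got:", ev.Type) // expect MODIFIED, then DELETED
	}
	return nil
}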
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":303,"completed":230,"skipped":3758,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:28:59.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1136.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1136.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1136.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1136.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 4 14:29:10.128: INFO: DNS probes using dns-test-5f5c199d-b3af-446b-a966-91f883fbb584 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1136.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1136.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1136.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1136.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 4 14:29:20.746: INFO: File wheezy_udp@dns-test-service-3.dns-1136.svc.cluster.local from pod dns-1136/dns-test-05dc749b-349e-481c-8f46-84ac81604df7 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 4 14:29:20.748: INFO: File jessie_udp@dns-test-service-3.dns-1136.svc.cluster.local from pod dns-1136/dns-test-05dc749b-349e-481c-8f46-84ac81604df7 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 4 14:29:20.748: INFO: Lookups using dns-1136/dns-test-05dc749b-349e-481c-8f46-84ac81604df7 failed for: [wheezy_udp@dns-test-service-3.dns-1136.svc.cluster.local jessie_udp@dns-test-service-3.dns-1136.svc.cluster.local] Sep 4 14:29:25.905: INFO: File wheezy_udp@dns-test-service-3.dns-1136.svc.cluster.local from pod dns-1136/dns-test-05dc749b-349e-481c-8f46-84ac81604df7 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 4 14:29:25.909: INFO: File jessie_udp@dns-test-service-3.dns-1136.svc.cluster.local from pod dns-1136/dns-test-05dc749b-349e-481c-8f46-84ac81604df7 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Sep 4 14:29:25.909: INFO: Lookups using dns-1136/dns-test-05dc749b-349e-481c-8f46-84ac81604df7 failed for: [wheezy_udp@dns-test-service-3.dns-1136.svc.cluster.local jessie_udp@dns-test-service-3.dns-1136.svc.cluster.local] Sep 4 14:29:30.758: INFO: File wheezy_udp@dns-test-service-3.dns-1136.svc.cluster.local from pod dns-1136/dns-test-05dc749b-349e-481c-8f46-84ac81604df7 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 4 14:29:30.763: INFO: File jessie_udp@dns-test-service-3.dns-1136.svc.cluster.local from pod dns-1136/dns-test-05dc749b-349e-481c-8f46-84ac81604df7 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 4 14:29:30.763: INFO: Lookups using dns-1136/dns-test-05dc749b-349e-481c-8f46-84ac81604df7 failed for: [wheezy_udp@dns-test-service-3.dns-1136.svc.cluster.local jessie_udp@dns-test-service-3.dns-1136.svc.cluster.local] Sep 4 14:29:35.754: INFO: File wheezy_udp@dns-test-service-3.dns-1136.svc.cluster.local from pod dns-1136/dns-test-05dc749b-349e-481c-8f46-84ac81604df7 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 4 14:29:35.759: INFO: File jessie_udp@dns-test-service-3.dns-1136.svc.cluster.local from pod dns-1136/dns-test-05dc749b-349e-481c-8f46-84ac81604df7 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 4 14:29:35.759: INFO: Lookups using dns-1136/dns-test-05dc749b-349e-481c-8f46-84ac81604df7 failed for: [wheezy_udp@dns-test-service-3.dns-1136.svc.cluster.local jessie_udp@dns-test-service-3.dns-1136.svc.cluster.local] Sep 4 14:29:40.757: INFO: DNS probes using dns-test-05dc749b-349e-481c-8f46-84ac81604df7 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1136.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1136.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1136.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-1136.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 4 14:29:50.014: INFO: DNS probes using dns-test-41431f10-cf42-49e6-baf1-08b552e422a9 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:29:50.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1136" for this suite. 
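Everything in this DNS spec hinges on Service type ExternalName, which programs the cluster DNS to serve a CNAME rather than a cluster IP. The retry loop above, where foo.example.com lingers after the switch to bar.example.com, is just DNS caching catching up, and the final phase swaps the same name over to a ClusterIP A record. A sketch of the initial service object, reusing the earlier clientset; the names mirror the log but are placeholders here.

svc := &corev1.Service{
	ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3"},
	Spec: corev1.ServiceSpec{
		Type:         corev1.ServiceTypeExternalName,
		ExternalName: "foo.example.com", // served as a CNAME by cluster DNS
	},
}
_, err := clientset.CoreV1().Services("dns-1136").Create(context.TODO(), svc, metav1.CreateOptions{})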
• [SLOW TEST:51.550 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":303,"completed":231,"skipped":3779,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Ingress API should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Ingress API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:29:50.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Sep 4 14:29:51.554: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Sep 4 14:29:51.557: INFO: starting watch STEP: patching STEP: updating Sep 4 14:29:51.607: INFO: waiting for watch events with expected annotations Sep 4 14:29:51.607: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:29:51.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-1802" for this suite. 
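The Ingress API spec is pure CRUD against networking.k8s.io/v1, walking create/get/list/watch/patch/update, the /status subresource, and collection delete. A create sketch for the smallest valid v1 Ingress, reusing the earlier clientset; networkingv1 is k8s.io/api/networking/v1, and the object and backend-service names are placeholders.

ing := &networkingv1.Ingress{
	ObjectMeta: metav1.ObjectMeta{Name: "ingress-demo"},
	Spec: networkingv1.IngressSpec{
		// Smallest valid spec: route everything to one backend service.
		DefaultBackend: &networkingv1.IngressBackend{
			Service: &networkingv1.IngressServiceBackend{
				Name: "demo-svc",
				Port: networkingv1.ServiceBackendPort{Number: 80},
			},
		},
	},
}
_, err := clientset.NetworkingV1().Ingresses("default").Create(context.TODO(), ing, metav1.CreateOptions{})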
•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":303,"completed":232,"skipped":3808,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:29:51.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath Sep 4 14:29:51.890: INFO: Waiting up to 5m0s for pod "var-expansion-533f904a-4a1c-4465-a2f9-0733e0d0a0e2" in namespace "var-expansion-6834" to be "Succeeded or Failed" Sep 4 14:29:51.929: INFO: Pod "var-expansion-533f904a-4a1c-4465-a2f9-0733e0d0a0e2": Phase="Pending", Reason="", readiness=false. Elapsed: 38.503575ms Sep 4 14:29:53.933: INFO: Pod "var-expansion-533f904a-4a1c-4465-a2f9-0733e0d0a0e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042881927s Sep 4 14:29:55.976: INFO: Pod "var-expansion-533f904a-4a1c-4465-a2f9-0733e0d0a0e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085450797s Sep 4 14:29:57.980: INFO: Pod "var-expansion-533f904a-4a1c-4465-a2f9-0733e0d0a0e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.08974176s STEP: Saw pod success Sep 4 14:29:57.980: INFO: Pod "var-expansion-533f904a-4a1c-4465-a2f9-0733e0d0a0e2" satisfied condition "Succeeded or Failed" Sep 4 14:29:57.983: INFO: Trying to get logs from node latest-worker pod var-expansion-533f904a-4a1c-4465-a2f9-0733e0d0a0e2 container dapi-container: STEP: delete the pod Sep 4 14:29:58.029: INFO: Waiting for pod var-expansion-533f904a-4a1c-4465-a2f9-0733e0d0a0e2 to disappear Sep 4 14:29:58.068: INFO: Pod var-expansion-533f904a-4a1c-4465-a2f9-0733e0d0a0e2 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:29:58.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6834" for this suite. 
• [SLOW TEST:6.372 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":303,"completed":233,"skipped":3820,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:29:58.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Sep 4 14:30:08.229: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 4 14:30:08.236: INFO: Pod pod-with-prestop-http-hook still exists Sep 4 14:30:10.236: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 4 14:30:10.240: INFO: Pod pod-with-prestop-http-hook still exists Sep 4 14:30:12.236: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 4 14:30:12.241: INFO: Pod pod-with-prestop-http-hook still exists Sep 4 14:30:14.236: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 4 14:30:14.241: INFO: Pod pod-with-prestop-http-hook still exists Sep 4 14:30:16.236: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 4 14:30:16.241: INFO: Pod pod-with-prestop-http-hook still exists Sep 4 14:30:18.236: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 4 14:30:18.239: INFO: Pod pod-with-prestop-http-hook still exists Sep 4 14:30:20.236: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 4 14:30:20.241: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:30:20.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3339" for this suite. 
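The hook mechanics above: deleting a pod first runs its preStop handler, and the pod only leaves the API once the handler has completed and the container has exited, which is why the suite polls "still exists" for several seconds before the pod disappears. A sketch of the hooked container, again assuming the v0.19 API (where hook handlers use corev1.Handler, and intstr is k8s.io/apimachinery/pkg/util/intstr); the path, port, and target IP stand in for the helper pod the test deploys and are placeholders.

corev1.Container{
	Name:  "pod-with-prestop-http-hook",
	Image: "k8s.gcr.io/pause:3.2", // placeholder image
	Lifecycle: &corev1.Lifecycle{
		PreStop: &corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/echo?msg=prestop",
				Port: intstr.FromInt(8080),
				Host: "10.244.1.23", // hypothetical IP of the hook-handling pod
			},
		},
	},
}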
• [SLOW TEST:22.183 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":303,"completed":234,"skipped":3827,"failed":0} SSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:30:20.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:30:55.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1478" for this suite. 
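The three containers above differ only in restart policy and exit behaviour, which is what fixes the expected Phase, Ready, and RestartCount values (in the e2e naming, the rpa/rpof/rpn suffixes encode restartPolicy Always, OnFailure, and Never). A minimal sketch of the Never case, where a single non-zero exit should leave the pod Failed with restartCount 0; the names are placeholders.

pod := &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "terminate-cmd-demo"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:    "main",
			Image:   "busybox:1.29",
			Command: []string{"sh", "-c", "exit 1"}, // non-zero exit, never restarted
		}},
	},
}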
• [SLOW TEST:35.781 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":303,"completed":235,"skipped":3832,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:30:56.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-52908095-dcc6-4bc7-8343-6df8c0e99e67 STEP: Creating a pod to test consume configMaps Sep 4 14:30:56.128: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-de532a52-d1c9-433d-80fc-d54799216270" in namespace "projected-3110" to be "Succeeded or Failed" Sep 4 14:30:56.170: INFO: Pod "pod-projected-configmaps-de532a52-d1c9-433d-80fc-d54799216270": Phase="Pending", Reason="", readiness=false. Elapsed: 42.58284ms Sep 4 14:30:58.291: INFO: Pod "pod-projected-configmaps-de532a52-d1c9-433d-80fc-d54799216270": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163114713s Sep 4 14:31:00.295: INFO: Pod "pod-projected-configmaps-de532a52-d1c9-433d-80fc-d54799216270": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.166810167s STEP: Saw pod success Sep 4 14:31:00.295: INFO: Pod "pod-projected-configmaps-de532a52-d1c9-433d-80fc-d54799216270" satisfied condition "Succeeded or Failed" Sep 4 14:31:00.297: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-de532a52-d1c9-433d-80fc-d54799216270 container projected-configmap-volume-test: STEP: delete the pod Sep 4 14:31:00.380: INFO: Waiting for pod pod-projected-configmaps-de532a52-d1c9-433d-80fc-d54799216270 to disappear Sep 4 14:31:00.387: INFO: Pod pod-projected-configmaps-de532a52-d1c9-433d-80fc-d54799216270 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:31:00.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3110" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":236,"skipped":3841,"failed":0} SSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:31:00.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Sep 4 14:31:05.047: INFO: Successfully updated pod "pod-update-c8baa4d3-33e5-4198-8e2c-4b9ae981fbad" STEP: verifying the updated pod is in kubernetes Sep 4 14:31:05.071: INFO: Pod update OK [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:31:05.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8525" for this suite. 
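The "updating the pod" step above is a read-modify-write gated on the pod's resourceVersion, so the idiomatic client-go form wraps it in a conflict retry. A sketch, reusing the earlier clientset; retry is k8s.io/client-go/util/retry, and the label key/value are placeholders.

func updatePodLabel(clientset *kubernetes.Clientset, ns, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, err := clientset.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels["time"] = "updated"
		_, err = clientset.CoreV1().Pods(ns).Update(context.TODO(), pod, metav1.UpdateOptions{})
		return err // retried automatically on a 409 Conflict
	})
}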
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":303,"completed":237,"skipped":3848,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:31:05.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support --unix-socket=/path [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy Sep 4 14:31:05.170: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix797749512/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:31:05.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5751" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":303,"completed":238,"skipped":3911,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:31:05.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 4 14:31:06.466: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 4 14:31:08.711: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734826666, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734826666, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734826666, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734826666, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 4 14:31:10.777: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734826666, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734826666, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734826666, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734826666, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 4 14:31:13.751: INFO: 
Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:31:14.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1263" for this suite. STEP: Destroying namespace "webhook-1263-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.905 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":303,"completed":239,"skipped":3932,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:31:14.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Sep 4 14:31:14.253: INFO: Waiting up to 5m0s for pod "pod-b7645493-8496-40e6-9fa6-c2ac4ff9478e" in namespace "emptydir-2237" to be "Succeeded or Failed" Sep 4 14:31:14.276: INFO: Pod 
"pod-b7645493-8496-40e6-9fa6-c2ac4ff9478e": Phase="Pending", Reason="", readiness=false. Elapsed: 22.709352ms Sep 4 14:31:16.386: INFO: Pod "pod-b7645493-8496-40e6-9fa6-c2ac4ff9478e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133496634s Sep 4 14:31:18.391: INFO: Pod "pod-b7645493-8496-40e6-9fa6-c2ac4ff9478e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.138152107s Sep 4 14:31:20.395: INFO: Pod "pod-b7645493-8496-40e6-9fa6-c2ac4ff9478e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.142365866s STEP: Saw pod success Sep 4 14:31:20.395: INFO: Pod "pod-b7645493-8496-40e6-9fa6-c2ac4ff9478e" satisfied condition "Succeeded or Failed" Sep 4 14:31:20.399: INFO: Trying to get logs from node latest-worker2 pod pod-b7645493-8496-40e6-9fa6-c2ac4ff9478e container test-container: STEP: delete the pod Sep 4 14:31:20.449: INFO: Waiting for pod pod-b7645493-8496-40e6-9fa6-c2ac4ff9478e to disappear Sep 4 14:31:20.490: INFO: Pod pod-b7645493-8496-40e6-9fa6-c2ac4ff9478e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:31:20.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2237" for this suite. • [SLOW TEST:6.335 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":240,"skipped":3937,"failed":0} [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:31:20.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 pods, got 1 pods STEP: Gathering metrics W0904 14:31:22.106661 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Sep 4 14:32:24.131: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
[AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:32:24.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-899" for this suite. • [SLOW TEST:63.639 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":303,"completed":241,"skipped":3937,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:32:24.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args Sep 4 14:32:24.244: INFO: Waiting up to 5m0s for pod "var-expansion-4ac90c39-c7ef-44b2-8fd5-59fc4fffae6b" in namespace "var-expansion-6916" to be "Succeeded or Failed" Sep 4 14:32:24.259: INFO: Pod "var-expansion-4ac90c39-c7ef-44b2-8fd5-59fc4fffae6b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.018132ms Sep 4 14:32:26.617: INFO: Pod "var-expansion-4ac90c39-c7ef-44b2-8fd5-59fc4fffae6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.372081039s Sep 4 14:32:28.621: INFO: Pod "var-expansion-4ac90c39-c7ef-44b2-8fd5-59fc4fffae6b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.376094581s Sep 4 14:32:30.624: INFO: Pod "var-expansion-4ac90c39-c7ef-44b2-8fd5-59fc4fffae6b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.379585418s STEP: Saw pod success Sep 4 14:32:30.624: INFO: Pod "var-expansion-4ac90c39-c7ef-44b2-8fd5-59fc4fffae6b" satisfied condition "Succeeded or Failed" Sep 4 14:32:30.626: INFO: Trying to get logs from node latest-worker pod var-expansion-4ac90c39-c7ef-44b2-8fd5-59fc4fffae6b container dapi-container: STEP: delete the pod Sep 4 14:32:30.679: INFO: Waiting for pod var-expansion-4ac90c39-c7ef-44b2-8fd5-59fc4fffae6b to disappear Sep 4 14:32:30.692: INFO: Pod var-expansion-4ac90c39-c7ef-44b2-8fd5-59fc4fffae6b no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:32:30.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6916" for this suite. • [SLOW TEST:6.560 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":303,"completed":242,"skipped":3990,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:32:30.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should provide secure master service [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:32:30.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4645" for this suite. 
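The "secure master service" check above amounts to verifying that the built-in kubernetes Service in the default namespace exposes the API server over HTTPS on port 443. A sketch of that lookup, reusing the earlier clientset; the expected target port varies by deployment (often 6443).

svc, err := clientset.CoreV1().Services("default").Get(context.TODO(), "kubernetes", metav1.GetOptions{})
if err != nil {
	panic(err)
}
for _, p := range svc.Spec.Ports {
	// Expect a port named "https" with Port 443.
	fmt.Println(p.Name, p.Port, p.TargetPort.String())
}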
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":303,"completed":243,"skipped":4003,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:32:30.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1487.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1487.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1487.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1487.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1487.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1487.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1487.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1487.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1487.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1487.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1487.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 153.5.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.5.153_udp@PTR;check="$$(dig +tcp +noall +answer +search 153.5.103.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.103.5.153_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1487.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1487.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1487.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1487.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1487.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1487.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1487.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1487.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1487.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1487.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1487.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 153.5.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.5.153_udp@PTR;check="$$(dig +tcp +noall +answer +search 153.5.103.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.103.5.153_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 4 14:32:39.206: INFO: Unable to read wheezy_udp@dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:39.209: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:39.211: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:39.214: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:39.238: INFO: Unable to read jessie_udp@dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:39.240: INFO: Unable to read jessie_tcp@dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:39.243: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:39.246: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:39.263: INFO: Lookups using dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b failed for: [wheezy_udp@dns-test-service.dns-1487.svc.cluster.local wheezy_tcp@dns-test-service.dns-1487.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local jessie_udp@dns-test-service.dns-1487.svc.cluster.local jessie_tcp@dns-test-service.dns-1487.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local] Sep 4 14:32:44.275: INFO: Unable to read wheezy_udp@dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:44.279: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) 
Sep 4 14:32:44.282: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:44.285: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:44.308: INFO: Unable to read jessie_udp@dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:44.311: INFO: Unable to read jessie_tcp@dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:44.314: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:44.317: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:44.336: INFO: Lookups using dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b failed for: [wheezy_udp@dns-test-service.dns-1487.svc.cluster.local wheezy_tcp@dns-test-service.dns-1487.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local jessie_udp@dns-test-service.dns-1487.svc.cluster.local jessie_tcp@dns-test-service.dns-1487.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local] Sep 4 14:32:49.268: INFO: Unable to read wheezy_udp@dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:49.272: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:49.275: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:49.278: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:49.318: INFO: Unable to read jessie_udp@dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods 
dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:49.321: INFO: Unable to read jessie_tcp@dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:49.324: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:49.327: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:49.343: INFO: Lookups using dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b failed for: [wheezy_udp@dns-test-service.dns-1487.svc.cluster.local wheezy_tcp@dns-test-service.dns-1487.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local jessie_udp@dns-test-service.dns-1487.svc.cluster.local jessie_tcp@dns-test-service.dns-1487.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local] Sep 4 14:32:54.268: INFO: Unable to read wheezy_udp@dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:54.272: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:54.275: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:54.278: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:54.305: INFO: Unable to read jessie_udp@dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:54.308: INFO: Unable to read jessie_tcp@dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:54.311: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:54.314: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could 
not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:54.333: INFO: Lookups using dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b failed for: [wheezy_udp@dns-test-service.dns-1487.svc.cluster.local wheezy_tcp@dns-test-service.dns-1487.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local jessie_udp@dns-test-service.dns-1487.svc.cluster.local jessie_tcp@dns-test-service.dns-1487.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local] Sep 4 14:32:59.268: INFO: Unable to read wheezy_udp@dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:59.272: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:59.275: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:59.278: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:59.298: INFO: Unable to read jessie_udp@dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:59.301: INFO: Unable to read jessie_tcp@dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:59.304: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:59.307: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:32:59.334: INFO: Lookups using dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b failed for: [wheezy_udp@dns-test-service.dns-1487.svc.cluster.local wheezy_tcp@dns-test-service.dns-1487.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local jessie_udp@dns-test-service.dns-1487.svc.cluster.local jessie_tcp@dns-test-service.dns-1487.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local] Sep 4 14:33:04.269: INFO: Unable to read wheezy_udp@dns-test-service.dns-1487.svc.cluster.local 
from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:33:04.272: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:33:04.274: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:33:04.276: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:33:04.303: INFO: Unable to read jessie_udp@dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:33:04.306: INFO: Unable to read jessie_tcp@dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:33:04.308: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:33:04.311: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local from pod dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b: the server could not find the requested resource (get pods dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b) Sep 4 14:33:04.329: INFO: Lookups using dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b failed for: [wheezy_udp@dns-test-service.dns-1487.svc.cluster.local wheezy_tcp@dns-test-service.dns-1487.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local jessie_udp@dns-test-service.dns-1487.svc.cluster.local jessie_tcp@dns-test-service.dns-1487.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1487.svc.cluster.local] Sep 4 14:33:09.357: INFO: DNS probes using dns-1487/dns-test-1ee25300-be2a-46fb-92ec-e12fc4aafa2b succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:33:10.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1487" for this suite. 
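Before the probes succeeded at 14:33:09, the test exercised every record type the Service publishes: A records for the service name, SRV records for the _http._tcp named port, a PTR record for the ClusterIP (the 153.5.103.10.in-addr.arpa. query above corresponds to 10.103.5.153), and the pod's own A record, each over both UDP and TCP. The reverse lookup can be spot-checked with dig's -x shorthand, which builds the same in-addr.arpa. query; a sketch under the same hypothetical dnsutils-pod assumption as before:

# PTR lookup for the service ClusterIP; dig -x reverses the
# octets into 153.5.103.10.in-addr.arpa. automatically.
kubectl exec dnsutils -- dig +notcp +noall +answer -x 10.103.5.153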
• [SLOW TEST:39.495 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":303,"completed":244,"skipped":4005,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:33:10.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-6872 STEP: creating service affinity-nodeport in namespace services-6872 STEP: creating replication controller affinity-nodeport in namespace services-6872 I0904 14:33:10.785567 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-6872, replica count: 3 I0904 14:33:13.835973 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0904 14:33:16.836113 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0904 14:33:19.836371 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 4 14:33:19.847: INFO: Creating new exec pod Sep 4 14:33:24.949: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6872 execpod-affinityfgmdg -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Sep 4 14:33:28.689: INFO: stderr: "I0904 14:33:28.609887 3139 log.go:181] (0xc00018c370) (0xc000126820) Create stream\nI0904 14:33:28.610008 3139 log.go:181] (0xc00018c370) (0xc000126820) Stream added, broadcasting: 1\nI0904 14:33:28.612022 3139 log.go:181] (0xc00018c370) Reply frame received for 1\nI0904 14:33:28.612067 3139 log.go:181] (0xc00018c370) (0xc0001270e0) Create stream\nI0904 14:33:28.612079 3139 log.go:181] (0xc00018c370) (0xc0001270e0) Stream added, broadcasting: 3\nI0904 14:33:28.613255 3139 log.go:181] (0xc00018c370) Reply frame received for 3\nI0904 14:33:28.613289 3139 log.go:181] (0xc00018c370) (0xc000127860) Create stream\nI0904 14:33:28.613299 3139 log.go:181] 
(0xc00018c370) (0xc000127860) Stream added, broadcasting: 5\nI0904 14:33:28.614225 3139 log.go:181] (0xc00018c370) Reply frame received for 5\nI0904 14:33:28.676108 3139 log.go:181] (0xc00018c370) Data frame received for 5\nI0904 14:33:28.676134 3139 log.go:181] (0xc000127860) (5) Data frame handling\nI0904 14:33:28.676153 3139 log.go:181] (0xc000127860) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0904 14:33:28.676434 3139 log.go:181] (0xc00018c370) Data frame received for 5\nI0904 14:33:28.676488 3139 log.go:181] (0xc000127860) (5) Data frame handling\nI0904 14:33:28.676518 3139 log.go:181] (0xc000127860) (5) Data frame sent\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0904 14:33:28.676629 3139 log.go:181] (0xc00018c370) Data frame received for 5\nI0904 14:33:28.676644 3139 log.go:181] (0xc000127860) (5) Data frame handling\nI0904 14:33:28.676898 3139 log.go:181] (0xc00018c370) Data frame received for 3\nI0904 14:33:28.676923 3139 log.go:181] (0xc0001270e0) (3) Data frame handling\nI0904 14:33:28.678536 3139 log.go:181] (0xc00018c370) Data frame received for 1\nI0904 14:33:28.678550 3139 log.go:181] (0xc000126820) (1) Data frame handling\nI0904 14:33:28.678561 3139 log.go:181] (0xc000126820) (1) Data frame sent\nI0904 14:33:28.678576 3139 log.go:181] (0xc00018c370) (0xc000126820) Stream removed, broadcasting: 1\nI0904 14:33:28.678680 3139 log.go:181] (0xc00018c370) Go away received\nI0904 14:33:28.678888 3139 log.go:181] (0xc00018c370) (0xc000126820) Stream removed, broadcasting: 1\nI0904 14:33:28.678904 3139 log.go:181] (0xc00018c370) (0xc0001270e0) Stream removed, broadcasting: 3\nI0904 14:33:28.678911 3139 log.go:181] (0xc00018c370) (0xc000127860) Stream removed, broadcasting: 5\n" Sep 4 14:33:28.689: INFO: stdout: "" Sep 4 14:33:28.690: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6872 execpod-affinityfgmdg -- /bin/sh -x -c nc -zv -t -w 2 10.99.132.141 80' Sep 4 14:33:28.918: INFO: stderr: "I0904 14:33:28.832639 3157 log.go:181] (0xc00053e0b0) (0xc000c561e0) Create stream\nI0904 14:33:28.832705 3157 log.go:181] (0xc00053e0b0) (0xc000c561e0) Stream added, broadcasting: 1\nI0904 14:33:28.835379 3157 log.go:181] (0xc00053e0b0) Reply frame received for 1\nI0904 14:33:28.835419 3157 log.go:181] (0xc00053e0b0) (0xc000536000) Create stream\nI0904 14:33:28.835437 3157 log.go:181] (0xc00053e0b0) (0xc000536000) Stream added, broadcasting: 3\nI0904 14:33:28.836272 3157 log.go:181] (0xc00053e0b0) Reply frame received for 3\nI0904 14:33:28.836309 3157 log.go:181] (0xc00053e0b0) (0xc0005360a0) Create stream\nI0904 14:33:28.836318 3157 log.go:181] (0xc00053e0b0) (0xc0005360a0) Stream added, broadcasting: 5\nI0904 14:33:28.837242 3157 log.go:181] (0xc00053e0b0) Reply frame received for 5\nI0904 14:33:28.903921 3157 log.go:181] (0xc00053e0b0) Data frame received for 3\nI0904 14:33:28.903950 3157 log.go:181] (0xc000536000) (3) Data frame handling\nI0904 14:33:28.903969 3157 log.go:181] (0xc00053e0b0) Data frame received for 5\nI0904 14:33:28.903980 3157 log.go:181] (0xc0005360a0) (5) Data frame handling\nI0904 14:33:28.904001 3157 log.go:181] (0xc0005360a0) (5) Data frame sent\nI0904 14:33:28.904011 3157 log.go:181] (0xc00053e0b0) Data frame received for 5\nI0904 14:33:28.904018 3157 log.go:181] (0xc0005360a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.99.132.141 80\nConnection to 10.99.132.141 80 port [tcp/http] succeeded!\nI0904 14:33:28.905325 3157 log.go:181] (0xc00053e0b0) 
Data frame received for 1\nI0904 14:33:28.905371 3157 log.go:181] (0xc000c561e0) (1) Data frame handling\nI0904 14:33:28.905394 3157 log.go:181] (0xc000c561e0) (1) Data frame sent\nI0904 14:33:28.905407 3157 log.go:181] (0xc00053e0b0) (0xc000c561e0) Stream removed, broadcasting: 1\nI0904 14:33:28.905436 3157 log.go:181] (0xc00053e0b0) Go away received\nI0904 14:33:28.905729 3157 log.go:181] (0xc00053e0b0) (0xc000c561e0) Stream removed, broadcasting: 1\nI0904 14:33:28.905746 3157 log.go:181] (0xc00053e0b0) (0xc000536000) Stream removed, broadcasting: 3\nI0904 14:33:28.905756 3157 log.go:181] (0xc00053e0b0) (0xc0005360a0) Stream removed, broadcasting: 5\n" Sep 4 14:33:28.918: INFO: stdout: "" Sep 4 14:33:28.918: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6872 execpod-affinityfgmdg -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 31047' Sep 4 14:33:29.135: INFO: stderr: "I0904 14:33:29.063540 3173 log.go:181] (0xc00003a420) (0xc000d4a1e0) Create stream\nI0904 14:33:29.063598 3173 log.go:181] (0xc00003a420) (0xc000d4a1e0) Stream added, broadcasting: 1\nI0904 14:33:29.065303 3173 log.go:181] (0xc00003a420) Reply frame received for 1\nI0904 14:33:29.065331 3173 log.go:181] (0xc00003a420) (0xc00072e320) Create stream\nI0904 14:33:29.065339 3173 log.go:181] (0xc00003a420) (0xc00072e320) Stream added, broadcasting: 3\nI0904 14:33:29.065927 3173 log.go:181] (0xc00003a420) Reply frame received for 3\nI0904 14:33:29.065952 3173 log.go:181] (0xc00003a420) (0xc000a94fa0) Create stream\nI0904 14:33:29.065959 3173 log.go:181] (0xc00003a420) (0xc000a94fa0) Stream added, broadcasting: 5\nI0904 14:33:29.066532 3173 log.go:181] (0xc00003a420) Reply frame received for 5\nI0904 14:33:29.122950 3173 log.go:181] (0xc00003a420) Data frame received for 5\nI0904 14:33:29.122975 3173 log.go:181] (0xc000a94fa0) (5) Data frame handling\nI0904 14:33:29.122986 3173 log.go:181] (0xc000a94fa0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.11 31047\nConnection to 172.18.0.11 31047 port [tcp/31047] succeeded!\nI0904 14:33:29.123299 3173 log.go:181] (0xc00003a420) Data frame received for 3\nI0904 14:33:29.123334 3173 log.go:181] (0xc00072e320) (3) Data frame handling\nI0904 14:33:29.123517 3173 log.go:181] (0xc00003a420) Data frame received for 5\nI0904 14:33:29.123535 3173 log.go:181] (0xc000a94fa0) (5) Data frame handling\nI0904 14:33:29.124716 3173 log.go:181] (0xc00003a420) Data frame received for 1\nI0904 14:33:29.124830 3173 log.go:181] (0xc000d4a1e0) (1) Data frame handling\nI0904 14:33:29.124848 3173 log.go:181] (0xc000d4a1e0) (1) Data frame sent\nI0904 14:33:29.125003 3173 log.go:181] (0xc00003a420) (0xc000d4a1e0) Stream removed, broadcasting: 1\nI0904 14:33:29.125318 3173 log.go:181] (0xc00003a420) (0xc000d4a1e0) Stream removed, broadcasting: 1\nI0904 14:33:29.125346 3173 log.go:181] (0xc00003a420) (0xc00072e320) Stream removed, broadcasting: 3\nI0904 14:33:29.125459 3173 log.go:181] (0xc00003a420) (0xc000a94fa0) Stream removed, broadcasting: 5\n" Sep 4 14:33:29.135: INFO: stdout: "" Sep 4 14:33:29.136: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6872 execpod-affinityfgmdg -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 31047' Sep 4 14:33:29.361: INFO: stderr: "I0904 14:33:29.289566 3191 log.go:181] (0xc0009de160) (0xc000a1c960) Create stream\nI0904 14:33:29.289623 3191 log.go:181] (0xc0009de160) (0xc000a1c960) Stream added, broadcasting: 1\nI0904 
14:33:29.292035 3191 log.go:181] (0xc0009de160) Reply frame received for 1\nI0904 14:33:29.292076 3191 log.go:181] (0xc0009de160) (0xc000a1ca00) Create stream\nI0904 14:33:29.292083 3191 log.go:181] (0xc0009de160) (0xc000a1ca00) Stream added, broadcasting: 3\nI0904 14:33:29.293037 3191 log.go:181] (0xc0009de160) Reply frame received for 3\nI0904 14:33:29.293067 3191 log.go:181] (0xc0009de160) (0xc000b8c500) Create stream\nI0904 14:33:29.293076 3191 log.go:181] (0xc0009de160) (0xc000b8c500) Stream added, broadcasting: 5\nI0904 14:33:29.293701 3191 log.go:181] (0xc0009de160) Reply frame received for 5\nI0904 14:33:29.352306 3191 log.go:181] (0xc0009de160) Data frame received for 5\nI0904 14:33:29.352347 3191 log.go:181] (0xc000b8c500) (5) Data frame handling\nI0904 14:33:29.352355 3191 log.go:181] (0xc000b8c500) (5) Data frame sent\nI0904 14:33:29.352360 3191 log.go:181] (0xc0009de160) Data frame received for 5\nI0904 14:33:29.352364 3191 log.go:181] (0xc000b8c500) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 31047\nConnection to 172.18.0.14 31047 port [tcp/31047] succeeded!\nI0904 14:33:29.352417 3191 log.go:181] (0xc0009de160) Data frame received for 3\nI0904 14:33:29.352476 3191 log.go:181] (0xc000a1ca00) (3) Data frame handling\nI0904 14:33:29.353693 3191 log.go:181] (0xc0009de160) Data frame received for 1\nI0904 14:33:29.353785 3191 log.go:181] (0xc000a1c960) (1) Data frame handling\nI0904 14:33:29.353810 3191 log.go:181] (0xc000a1c960) (1) Data frame sent\nI0904 14:33:29.353837 3191 log.go:181] (0xc0009de160) (0xc000a1c960) Stream removed, broadcasting: 1\nI0904 14:33:29.353857 3191 log.go:181] (0xc0009de160) Go away received\nI0904 14:33:29.354381 3191 log.go:181] (0xc0009de160) (0xc000a1c960) Stream removed, broadcasting: 1\nI0904 14:33:29.354406 3191 log.go:181] (0xc0009de160) (0xc000a1ca00) Stream removed, broadcasting: 3\nI0904 14:33:29.354417 3191 log.go:181] (0xc0009de160) (0xc000b8c500) Stream removed, broadcasting: 5\n" Sep 4 14:33:29.361: INFO: stdout: "" Sep 4 14:33:29.361: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6872 execpod-affinityfgmdg -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:31047/ ; done' Sep 4 14:33:29.680: INFO: stderr: "I0904 14:33:29.490970 3207 log.go:181] (0xc000e06c60) (0xc000378960) Create stream\nI0904 14:33:29.491021 3207 log.go:181] (0xc000e06c60) (0xc000378960) Stream added, broadcasting: 1\nI0904 14:33:29.496526 3207 log.go:181] (0xc000e06c60) Reply frame received for 1\nI0904 14:33:29.496582 3207 log.go:181] (0xc000e06c60) (0xc00043dd60) Create stream\nI0904 14:33:29.496604 3207 log.go:181] (0xc000e06c60) (0xc00043dd60) Stream added, broadcasting: 3\nI0904 14:33:29.497554 3207 log.go:181] (0xc000e06c60) Reply frame received for 3\nI0904 14:33:29.497579 3207 log.go:181] (0xc000e06c60) (0xc0008a8000) Create stream\nI0904 14:33:29.497585 3207 log.go:181] (0xc000e06c60) (0xc0008a8000) Stream added, broadcasting: 5\nI0904 14:33:29.498274 3207 log.go:181] (0xc000e06c60) Reply frame received for 5\nI0904 14:33:29.571666 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.571714 3207 log.go:181] (0xc000e06c60) Data frame received for 5\nI0904 14:33:29.571753 3207 log.go:181] (0xc0008a8000) (5) Data frame handling\nI0904 14:33:29.571764 3207 log.go:181] (0xc0008a8000) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31047/\nI0904 
14:33:29.571781 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.571820 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.575251 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.575270 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.575284 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.575989 3207 log.go:181] (0xc000e06c60) Data frame received for 5\nI0904 14:33:29.576010 3207 log.go:181] (0xc0008a8000) (5) Data frame handling\nI0904 14:33:29.576017 3207 log.go:181] (0xc0008a8000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31047/\nI0904 14:33:29.576030 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.576058 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.576074 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.582353 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.582371 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.582385 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.583194 3207 log.go:181] (0xc000e06c60) Data frame received for 5\nI0904 14:33:29.583225 3207 log.go:181] (0xc0008a8000) (5) Data frame handling\nI0904 14:33:29.583238 3207 log.go:181] (0xc0008a8000) (5) Data frame sent\nI0904 14:33:29.583247 3207 log.go:181] (0xc000e06c60) Data frame received for 5\nI0904 14:33:29.583256 3207 log.go:181] (0xc0008a8000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31047/\nI0904 14:33:29.583278 3207 log.go:181] (0xc0008a8000) (5) Data frame sent\nI0904 14:33:29.583304 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.583325 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.583335 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.588014 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.588031 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.588041 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.588867 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.588882 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.588897 3207 log.go:181] (0xc000e06c60) Data frame received for 5\nI0904 14:33:29.588923 3207 log.go:181] (0xc0008a8000) (5) Data frame handling\nI0904 14:33:29.588937 3207 log.go:181] (0xc0008a8000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31047/\nI0904 14:33:29.588953 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.593712 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.593728 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.593738 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.594424 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.594456 3207 log.go:181] (0xc000e06c60) Data frame received for 5\nI0904 14:33:29.594494 3207 log.go:181] (0xc0008a8000) (5) Data frame handling\nI0904 14:33:29.594511 3207 log.go:181] (0xc0008a8000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31047/\nI0904 14:33:29.594532 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.594543 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.600056 3207 log.go:181] (0xc000e06c60) Data frame received for 
3\nI0904 14:33:29.600075 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.600084 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.600425 3207 log.go:181] (0xc000e06c60) Data frame received for 5\nI0904 14:33:29.600445 3207 log.go:181] (0xc0008a8000) (5) Data frame handling\nI0904 14:33:29.600464 3207 log.go:181] (0xc0008a8000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31047/\nI0904 14:33:29.605192 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.605209 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.605223 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.606357 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.606376 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.606392 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.607052 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.607071 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.607097 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.607115 3207 log.go:181] (0xc000e06c60) Data frame received for 5\nI0904 14:33:29.607120 3207 log.go:181] (0xc0008a8000) (5) Data frame handling\nI0904 14:33:29.607126 3207 log.go:181] (0xc0008a8000) (5) Data frame sent\nI0904 14:33:29.607131 3207 log.go:181] (0xc000e06c60) Data frame received for 5\nI0904 14:33:29.607136 3207 log.go:181] (0xc0008a8000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31047/\nI0904 14:33:29.607158 3207 log.go:181] (0xc0008a8000) (5) Data frame sent\nI0904 14:33:29.611443 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.611461 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.611477 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.611880 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.611899 3207 log.go:181] (0xc000e06c60) Data frame received for 5\nI0904 14:33:29.611923 3207 log.go:181] (0xc0008a8000) (5) Data frame handling\nI0904 14:33:29.611934 3207 log.go:181] (0xc0008a8000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31047/\nI0904 14:33:29.611948 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.611956 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.617153 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.617178 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.617193 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.617999 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.618081 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.618097 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.618110 3207 log.go:181] (0xc000e06c60) Data frame received for 5\nI0904 14:33:29.618119 3207 log.go:181] (0xc0008a8000) (5) Data frame handling\nI0904 14:33:29.618132 3207 log.go:181] (0xc0008a8000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31047/\nI0904 14:33:29.623686 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.623711 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.623726 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.624345 3207 log.go:181] (0xc000e06c60) Data frame 
received for 3\nI0904 14:33:29.624370 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.624380 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.624394 3207 log.go:181] (0xc000e06c60) Data frame received for 5\nI0904 14:33:29.624402 3207 log.go:181] (0xc0008a8000) (5) Data frame handling\nI0904 14:33:29.624409 3207 log.go:181] (0xc0008a8000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31047/\nI0904 14:33:29.629194 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.629214 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.629236 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.629960 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.629977 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.629989 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.630073 3207 log.go:181] (0xc000e06c60) Data frame received for 5\nI0904 14:33:29.630098 3207 log.go:181] (0xc0008a8000) (5) Data frame handling\nI0904 14:33:29.630122 3207 log.go:181] (0xc0008a8000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31047/\nI0904 14:33:29.635692 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.635708 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.635723 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.636631 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.636652 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.636671 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.636708 3207 log.go:181] (0xc000e06c60) Data frame received for 5\nI0904 14:33:29.636846 3207 log.go:181] (0xc0008a8000) (5) Data frame handling\nI0904 14:33:29.636881 3207 log.go:181] (0xc0008a8000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31047/\nI0904 14:33:29.643025 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.643048 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.643063 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.644041 3207 log.go:181] (0xc000e06c60) Data frame received for 5\nI0904 14:33:29.644152 3207 log.go:181] (0xc0008a8000) (5) Data frame handling\nI0904 14:33:29.644166 3207 log.go:181] (0xc0008a8000) (5) Data frame sent\nI0904 14:33:29.644173 3207 log.go:181] (0xc000e06c60) Data frame received for 5\nI0904 14:33:29.644177 3207 log.go:181] (0xc0008a8000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31047/\nI0904 14:33:29.644190 3207 log.go:181] (0xc0008a8000) (5) Data frame sent\nI0904 14:33:29.644197 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.644206 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.644211 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.648195 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.648215 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.648239 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.648872 3207 log.go:181] (0xc000e06c60) Data frame received for 5\nI0904 14:33:29.648899 3207 log.go:181] (0xc0008a8000) (5) Data frame handling\nI0904 14:33:29.648912 3207 log.go:181] (0xc0008a8000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.18.0.11:31047/\nI0904 14:33:29.648920 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.648972 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.648986 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.653198 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.653223 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.653250 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.653691 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.653714 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.653745 3207 log.go:181] (0xc000e06c60) Data frame received for 5\nI0904 14:33:29.653766 3207 log.go:181] (0xc0008a8000) (5) Data frame handling\nI0904 14:33:29.653776 3207 log.go:181] (0xc0008a8000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31047/\nI0904 14:33:29.653789 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.667077 3207 log.go:181] (0xc000e06c60) Data frame received for 5\nI0904 14:33:29.667108 3207 log.go:181] (0xc0008a8000) (5) Data frame handling\nI0904 14:33:29.667123 3207 log.go:181] (0xc0008a8000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31047/\nI0904 14:33:29.667149 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.667164 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.667174 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.667184 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.667192 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.667200 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.670815 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.670832 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.670842 3207 log.go:181] (0xc00043dd60) (3) Data frame sent\nI0904 14:33:29.671513 3207 log.go:181] (0xc000e06c60) Data frame received for 3\nI0904 14:33:29.671536 3207 log.go:181] (0xc00043dd60) (3) Data frame handling\nI0904 14:33:29.671567 3207 log.go:181] (0xc000e06c60) Data frame received for 5\nI0904 14:33:29.671582 3207 log.go:181] (0xc0008a8000) (5) Data frame handling\nI0904 14:33:29.673156 3207 log.go:181] (0xc000e06c60) Data frame received for 1\nI0904 14:33:29.673175 3207 log.go:181] (0xc000378960) (1) Data frame handling\nI0904 14:33:29.673184 3207 log.go:181] (0xc000378960) (1) Data frame sent\nI0904 14:33:29.673195 3207 log.go:181] (0xc000e06c60) (0xc000378960) Stream removed, broadcasting: 1\nI0904 14:33:29.673233 3207 log.go:181] (0xc000e06c60) Go away received\nI0904 14:33:29.673545 3207 log.go:181] (0xc000e06c60) (0xc000378960) Stream removed, broadcasting: 1\nI0904 14:33:29.673557 3207 log.go:181] (0xc000e06c60) (0xc00043dd60) Stream removed, broadcasting: 3\nI0904 14:33:29.673562 3207 log.go:181] (0xc000e06c60) (0xc0008a8000) Stream removed, broadcasting: 5\n" Sep 4 14:33:29.681: INFO: stdout: "\naffinity-nodeport-f888l\naffinity-nodeport-f888l\naffinity-nodeport-f888l\naffinity-nodeport-f888l\naffinity-nodeport-f888l\naffinity-nodeport-f888l\naffinity-nodeport-f888l\naffinity-nodeport-f888l\naffinity-nodeport-f888l\naffinity-nodeport-f888l\naffinity-nodeport-f888l\naffinity-nodeport-f888l\naffinity-nodeport-f888l\naffinity-nodeport-f888l\naffinity-nodeport-f888l\naffinity-nodeport-f888l" Sep 4 14:33:29.681: INFO: Received 
response from host: affinity-nodeport-f888l Sep 4 14:33:29.681: INFO: Received response from host: affinity-nodeport-f888l Sep 4 14:33:29.681: INFO: Received response from host: affinity-nodeport-f888l Sep 4 14:33:29.681: INFO: Received response from host: affinity-nodeport-f888l Sep 4 14:33:29.681: INFO: Received response from host: affinity-nodeport-f888l Sep 4 14:33:29.681: INFO: Received response from host: affinity-nodeport-f888l Sep 4 14:33:29.681: INFO: Received response from host: affinity-nodeport-f888l Sep 4 14:33:29.681: INFO: Received response from host: affinity-nodeport-f888l Sep 4 14:33:29.681: INFO: Received response from host: affinity-nodeport-f888l Sep 4 14:33:29.681: INFO: Received response from host: affinity-nodeport-f888l Sep 4 14:33:29.681: INFO: Received response from host: affinity-nodeport-f888l Sep 4 14:33:29.681: INFO: Received response from host: affinity-nodeport-f888l Sep 4 14:33:29.681: INFO: Received response from host: affinity-nodeport-f888l Sep 4 14:33:29.681: INFO: Received response from host: affinity-nodeport-f888l Sep 4 14:33:29.681: INFO: Received response from host: affinity-nodeport-f888l Sep 4 14:33:29.681: INFO: Received response from host: affinity-nodeport-f888l Sep 4 14:33:29.681: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-6872, will wait for the garbage collector to delete the pods Sep 4 14:33:29.839: INFO: Deleting ReplicationController affinity-nodeport took: 5.406662ms Sep 4 14:33:30.539: INFO: Terminating ReplicationController affinity-nodeport pods took: 700.183316ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:33:40.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6872" for this suite. 
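All sixteen curl responses above name the same backend, affinity-nodeport-f888l, even though the replication controller ran three replicas; that is the property under test, since sessionAffinity: ClientIP on a Service makes kube-proxy pin each client to a single endpoint. The check is easy to rerun against any such service; a minimal sketch, with the service name taken from the log but the node IP and NodePort (172.18.0.11:31047) valid only for this run:

# Ensure ClientIP affinity is set on the service.
kubectl patch svc affinity-nodeport -n services-6872 \
    -p '{"spec":{"sessionAffinity":"ClientIP"}}'

# Hit the NodePort repeatedly; with affinity working, every line
# of output should name the same backend pod.
for i in $(seq 0 15); do
    curl -q -s --connect-timeout 2 http://172.18.0.11:31047/; echo
done | sort | uniq -c

A count of 16 for a single hostname reproduces the result above; a spread across several hostnames would mean affinity is not taking effect.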
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:29.595 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":245,"skipped":4033,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:33:40.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-24483c20-2d32-4aea-96a1-066af3e9becd STEP: Creating a pod to test consume secrets Sep 4 14:33:40.187: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-01e68854-a48a-4a24-b014-9947b6e0a4bd" in namespace "projected-4336" to be "Succeeded or Failed" Sep 4 14:33:40.251: INFO: Pod "pod-projected-secrets-01e68854-a48a-4a24-b014-9947b6e0a4bd": Phase="Pending", Reason="", readiness=false. Elapsed: 64.086422ms Sep 4 14:33:42.254: INFO: Pod "pod-projected-secrets-01e68854-a48a-4a24-b014-9947b6e0a4bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06719294s Sep 4 14:33:44.259: INFO: Pod "pod-projected-secrets-01e68854-a48a-4a24-b014-9947b6e0a4bd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.071589129s STEP: Saw pod success Sep 4 14:33:44.259: INFO: Pod "pod-projected-secrets-01e68854-a48a-4a24-b014-9947b6e0a4bd" satisfied condition "Succeeded or Failed" Sep 4 14:33:44.262: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-01e68854-a48a-4a24-b014-9947b6e0a4bd container projected-secret-volume-test: STEP: delete the pod Sep 4 14:33:44.299: INFO: Waiting for pod pod-projected-secrets-01e68854-a48a-4a24-b014-9947b6e0a4bd to disappear Sep 4 14:33:44.479: INFO: Pod pod-projected-secrets-01e68854-a48a-4a24-b014-9947b6e0a4bd no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:33:44.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4336" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":246,"skipped":4050,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:33:44.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Sep 4 14:33:44.710: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1190' Sep 4 14:33:45.168: INFO: stderr: "" Sep 4 14:33:45.168: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Sep 4 14:33:46.215: INFO: Selector matched 1 pods for map[app:agnhost] Sep 4 14:33:46.215: INFO: Found 0 / 1 Sep 4 14:33:47.179: INFO: Selector matched 1 pods for map[app:agnhost] Sep 4 14:33:47.179: INFO: Found 0 / 1 Sep 4 14:33:48.201: INFO: Selector matched 1 pods for map[app:agnhost] Sep 4 14:33:48.201: INFO: Found 0 / 1 Sep 4 14:33:49.179: INFO: Selector matched 1 pods for map[app:agnhost] Sep 4 14:33:49.179: INFO: Found 1 / 1 Sep 4 14:33:49.179: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Sep 4 14:33:49.182: INFO: Selector matched 1 pods for map[app:agnhost] Sep 4 14:33:49.182: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
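The patch that runs next applies a strategic merge patch adding a single annotation (x: y) to each pod matched by the app=agnhost selector. Standalone, the same operation and its verification look like this; the pod name is hypothetical, since the real one (agnhost-primary-lnfvh here) is generated per run:

# Add the annotation to one pod...
kubectl patch pod mypod -n kubectl-1190 \
    -p '{"metadata":{"annotations":{"x":"y"}}}'

# ...and confirm it was recorded.
kubectl get pod mypod -n kubectl-1190 \
    -o jsonpath='{.metadata.annotations.x}'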
Sep 4 14:33:49.182: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config patch pod agnhost-primary-lnfvh --namespace=kubectl-1190 -p {"metadata":{"annotations":{"x":"y"}}}' Sep 4 14:33:49.306: INFO: stderr: "" Sep 4 14:33:49.306: INFO: stdout: "pod/agnhost-primary-lnfvh patched\n" STEP: checking annotations Sep 4 14:33:49.340: INFO: Selector matched 1 pods for map[app:agnhost] Sep 4 14:33:49.340: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:33:49.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1190" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":303,"completed":247,"skipped":4054,"failed":0} SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:33:49.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
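The It block below builds a pod whose container declares a postStart exec hook; the kubelet runs the hook inside the container right after it starts and does not mark the container Running until the hook completes, and the "check poststart hook" step then verifies, via the handler created above, that the hook actually ran. A minimal spec carrying such a hook, for illustration only (image and hook command are hypothetical; the pod name matches the log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    lifecycle:
      postStart:
        exec:
          # Runs in the container immediately after start; the
          # container stays in Waiting until this command exits.
          command: ["/bin/sh", "-c", "echo started > /tmp/poststart"]
EOF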
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Sep 4 14:34:01.736: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 4 14:34:01.748: INFO: Pod pod-with-poststart-exec-hook still exists Sep 4 14:34:03.749: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 4 14:34:03.753: INFO: Pod pod-with-poststart-exec-hook still exists Sep 4 14:34:05.749: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 4 14:34:05.753: INFO: Pod pod-with-poststart-exec-hook still exists Sep 4 14:34:07.749: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 4 14:34:07.753: INFO: Pod pod-with-poststart-exec-hook still exists Sep 4 14:34:09.749: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 4 14:34:09.753: INFO: Pod pod-with-poststart-exec-hook still exists Sep 4 14:34:11.749: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 4 14:34:11.752: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:34:11.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8962" for this suite. • [SLOW TEST:22.410 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":303,"completed":248,"skipped":4056,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:34:11.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:34:22.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-807" for this suite. • [SLOW TEST:11.126 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":303,"completed":249,"skipped":4080,"failed":0} SSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:34:22.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-311.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-311.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-311.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-311.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-311.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-311.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-311.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-311.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-311.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-311.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-311.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-311.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-311.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-311.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-311.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-311.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-311.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-311.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 4 14:34:31.056: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-311.svc.cluster.local from pod dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb: the server could not find the requested resource (get pods dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb) Sep 4 14:34:31.059: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-311.svc.cluster.local from pod dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb: the server could not find the requested resource (get pods dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb) Sep 4 14:34:31.062: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-311.svc.cluster.local from pod dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb: the server could not find the requested resource (get pods dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb) Sep 4 14:34:31.070: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-311.svc.cluster.local from pod dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb: the server could not find the requested resource (get pods dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb) Sep 4 14:34:31.073: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-311.svc.cluster.local from pod dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb: the server could not find the requested resource (get pods dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb) Sep 4 14:34:31.076: INFO: Unable to read jessie_udp@dns-test-service-2.dns-311.svc.cluster.local from pod dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb: the server could not find the requested resource (get pods dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb) Sep 4 14:34:31.078: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-311.svc.cluster.local from pod dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb: the server could not 
find the requested resource (get pods dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb) Sep 4 14:34:31.084: INFO: Lookups using dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb failed for: [wheezy_tcp@dns-querier-2.dns-test-service-2.dns-311.svc.cluster.local wheezy_udp@dns-test-service-2.dns-311.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-311.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-311.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-311.svc.cluster.local jessie_udp@dns-test-service-2.dns-311.svc.cluster.local jessie_tcp@dns-test-service-2.dns-311.svc.cluster.local] Sep 4 14:34:36.095: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-311.svc.cluster.local from pod dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb: the server could not find the requested resource (get pods dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb) Sep 4 14:34:36.098: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-311.svc.cluster.local from pod dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb: the server could not find the requested resource (get pods dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb) Sep 4 14:34:36.111: INFO: Unable to read jessie_udp@dns-test-service-2.dns-311.svc.cluster.local from pod dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb: the server could not find the requested resource (get pods dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb) Sep 4 14:34:36.114: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-311.svc.cluster.local from pod dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb: the server could not find the requested resource (get pods dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb) Sep 4 14:34:36.120: INFO: Lookups using dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb failed for: [wheezy_udp@dns-test-service-2.dns-311.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-311.svc.cluster.local jessie_udp@dns-test-service-2.dns-311.svc.cluster.local jessie_tcp@dns-test-service-2.dns-311.svc.cluster.local] Sep 4 14:34:41.095: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-311.svc.cluster.local from pod dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb: the server could not find the requested resource (get pods dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb) Sep 4 14:34:41.099: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-311.svc.cluster.local from pod dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb: the server could not find the requested resource (get pods dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb) Sep 4 14:34:41.114: INFO: Unable to read jessie_udp@dns-test-service-2.dns-311.svc.cluster.local from pod dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb: the server could not find the requested resource (get pods dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb) Sep 4 14:34:41.116: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-311.svc.cluster.local from pod dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb: the server could not find the requested resource (get pods dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb) Sep 4 14:34:41.120: INFO: Lookups using dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb failed for: [wheezy_udp@dns-test-service-2.dns-311.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-311.svc.cluster.local jessie_udp@dns-test-service-2.dns-311.svc.cluster.local jessie_tcp@dns-test-service-2.dns-311.svc.cluster.local] Sep 4 14:34:46.094: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-311.svc.cluster.local from pod 
dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb: the server could not find the requested resource (get pods dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb) Sep 4 14:34:46.097: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-311.svc.cluster.local from pod dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb: the server could not find the requested resource (get pods dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb) Sep 4 14:34:46.114: INFO: Unable to read jessie_udp@dns-test-service-2.dns-311.svc.cluster.local from pod dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb: the server could not find the requested resource (get pods dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb) Sep 4 14:34:46.150: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-311.svc.cluster.local from pod dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb: the server could not find the requested resource (get pods dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb) Sep 4 14:34:46.193: INFO: Lookups using dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb failed for: [wheezy_udp@dns-test-service-2.dns-311.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-311.svc.cluster.local jessie_udp@dns-test-service-2.dns-311.svc.cluster.local jessie_tcp@dns-test-service-2.dns-311.svc.cluster.local] Sep 4 14:34:51.142: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-311.svc.cluster.local from pod dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb: the server could not find the requested resource (get pods dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb) Sep 4 14:34:51.145: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-311.svc.cluster.local from pod dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb: the server could not find the requested resource (get pods dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb) Sep 4 14:34:51.159: INFO: Unable to read jessie_udp@dns-test-service-2.dns-311.svc.cluster.local from pod dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb: the server could not find the requested resource (get pods dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb) Sep 4 14:34:51.162: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-311.svc.cluster.local from pod dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb: the server could not find the requested resource (get pods dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb) Sep 4 14:34:51.167: INFO: Lookups using dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb failed for: [wheezy_udp@dns-test-service-2.dns-311.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-311.svc.cluster.local jessie_udp@dns-test-service-2.dns-311.svc.cluster.local jessie_tcp@dns-test-service-2.dns-311.svc.cluster.local] Sep 4 14:34:56.098: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-311.svc.cluster.local from pod dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb: the server could not find the requested resource (get pods dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb) Sep 4 14:34:56.101: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-311.svc.cluster.local from pod dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb: the server could not find the requested resource (get pods dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb) Sep 4 14:34:56.121: INFO: Unable to read jessie_udp@dns-test-service-2.dns-311.svc.cluster.local from pod dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb: the server could not find the requested resource (get pods dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb) Sep 4 14:34:56.123: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-311.svc.cluster.local from pod dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb: the server could not find the requested resource (get pods dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb) Sep 4 14:34:56.128: INFO: Lookups using dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb failed for: [wheezy_udp@dns-test-service-2.dns-311.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-311.svc.cluster.local jessie_udp@dns-test-service-2.dns-311.svc.cluster.local jessie_tcp@dns-test-service-2.dns-311.svc.cluster.local] Sep 4 14:35:01.128: INFO: DNS probes using dns-311/dns-test-4d7bdaae-5789-4b29-bf5e-328ff2e82fdb succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:35:01.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-311" for this suite. • [SLOW TEST:38.416 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":303,"completed":250,"skipped":4085,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:35:01.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-7571 [It] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-7571 STEP: Creating statefulset with conflicting port in namespace statefulset-7571 STEP: Waiting until pod test-pod will start running in namespace statefulset-7571 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7571 Sep 4 14:35:08.498: INFO: Observed stateful pod in namespace: statefulset-7571, name: ss-0, uid: 
c7933eba-bc9a-47d7-9d83-123a61a04c4d, status phase: Pending. Waiting for statefulset controller to delete. Sep 4 14:35:08.782: INFO: Observed stateful pod in namespace: statefulset-7571, name: ss-0, uid: c7933eba-bc9a-47d7-9d83-123a61a04c4d, status phase: Failed. Waiting for statefulset controller to delete. Sep 4 14:35:08.829: INFO: Observed stateful pod in namespace: statefulset-7571, name: ss-0, uid: c7933eba-bc9a-47d7-9d83-123a61a04c4d, status phase: Failed. Waiting for statefulset controller to delete. Sep 4 14:35:08.873: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7571 STEP: Removing pod with conflicting port in namespace statefulset-7571 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7571 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 4 14:35:15.032: INFO: Deleting all statefulset in ns statefulset-7571 Sep 4 14:35:15.055: INFO: Scaling statefulset ss to 0 Sep 4 14:35:35.078: INFO: Waiting for statefulset status.replicas updated to 0 Sep 4 14:35:35.081: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:35:35.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7571" for this suite. • [SLOW TEST:33.826 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":303,"completed":251,"skipped":4099,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:35:35.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-5503, will wait for the garbage collector to delete the pods Sep 4 14:35:41.290: INFO: Deleting Job.batch foo took: 6.332514ms Sep 4 
14:35:41.391: INFO: Terminating Job.batch foo pods took: 100.328066ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:36:20.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5503" for this suite. • [SLOW TEST:44.973 seconds] [sig-apps] Job /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":303,"completed":252,"skipped":4113,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:36:20.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 4 14:36:20.158: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 4 14:36:20.187: INFO: Waiting for terminating namespaces to be deleted... 
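Replaying the earlier "[sig-apps] Job should delete a job" flow by hand: the test creates the Job through the client library, but a loose kubectl equivalent (image and command illustrative) is:

kubectl create job foo --image=busybox -- sh -c 'sleep 3600'
kubectl delete job foo
# pods owned by the Job carry a job-name label, so the garbage collector's
# cleanup can be observed much like the test's "Ensuring job was deleted" step:
kubectl wait --for=delete pod -l job-name=foo --timeout=120s
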
Sep 4 14:36:20.196: INFO: Logging pods the apiserver thinks is on node latest-worker before test Sep 4 14:36:20.202: INFO: daemon-set-64t9w from daemonsets-1323 started at 2020-08-21 01:17:50 +0000 UTC (1 container statuses recorded) Sep 4 14:36:20.202: INFO: Container app ready: true, restart count 0 Sep 4 14:36:20.202: INFO: daemon-set-ff4l6 from daemonsets-8598 started at 2020-08-26 01:17:55 +0000 UTC (1 container statuses recorded) Sep 4 14:36:20.202: INFO: Container app ready: true, restart count 0 Sep 4 14:36:20.202: INFO: live6 from default started at 2020-08-30 11:51:51 +0000 UTC (1 container statuses recorded) Sep 4 14:36:20.202: INFO: Container live6 ready: false, restart count 0 Sep 4 14:36:20.202: INFO: test-recreate-deployment-f79dd4667-n4rtn from deployment-6445 started at 2020-08-28 02:33:33 +0000 UTC (1 container statuses recorded) Sep 4 14:36:20.202: INFO: Container httpd ready: true, restart count 0 Sep 4 14:36:20.202: INFO: bono-7b5b98574f-j2wlq from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (2 container statuses recorded) Sep 4 14:36:20.202: INFO: Container bono ready: true, restart count 0 Sep 4 14:36:20.202: INFO: Container tailer ready: true, restart count 0 Sep 4 14:36:20.202: INFO: chronos-678bcff97d-665n9 from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (2 container statuses recorded) Sep 4 14:36:20.202: INFO: Container chronos ready: true, restart count 0 Sep 4 14:36:20.202: INFO: Container tailer ready: true, restart count 0 Sep 4 14:36:20.202: INFO: homer-6d85c54796-5grhn from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (1 container statuses recorded) Sep 4 14:36:20.202: INFO: Container homer ready: true, restart count 0 Sep 4 14:36:20.202: INFO: homestead-prov-54ddb995c5-phmgj from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (1 container statuses recorded) Sep 4 14:36:20.202: INFO: Container homestead-prov ready: true, restart count 0 Sep 4 14:36:20.202: INFO: live-test from ims-fqddr started at 2020-08-30 10:33:20 +0000 UTC (1 container statuses recorded) Sep 4 14:36:20.202: INFO: Container live-test ready: false, restart count 0 Sep 4 14:36:20.202: INFO: ralf-645db98795-l7gpf from ims-fqddr started at 2020-08-30 10:27:31 +0000 UTC (2 container statuses recorded) Sep 4 14:36:20.202: INFO: Container ralf ready: true, restart count 0 Sep 4 14:36:20.202: INFO: Container tailer ready: true, restart count 0 Sep 4 14:36:20.202: INFO: kindnet-gmpqb from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Sep 4 14:36:20.202: INFO: Container kindnet-cni ready: true, restart count 1 Sep 4 14:36:20.202: INFO: kube-proxy-82wrf from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Sep 4 14:36:20.202: INFO: Container kube-proxy ready: true, restart count 0 Sep 4 14:36:20.202: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Sep 4 14:36:20.209: INFO: daemon-set-jxhg7 from daemonsets-1323 started at 2020-08-21 01:17:50 +0000 UTC (1 container statuses recorded) Sep 4 14:36:20.209: INFO: Container app ready: true, restart count 0 Sep 4 14:36:20.209: INFO: daemon-set-6qbhl from daemonsets-8598 started at 2020-08-26 01:17:55 +0000 UTC (1 container statuses recorded) Sep 4 14:36:20.209: INFO: Container app ready: true, restart count 0 Sep 4 14:36:20.209: INFO: live3 from default started at 2020-08-30 11:14:22 +0000 UTC (1 container statuses recorded) Sep 4 14:36:20.209: INFO: Container live3 ready: false, restart count 0 Sep 4 14:36:20.209: INFO: live4 
from default started at 2020-08-30 11:19:29 +0000 UTC (1 container statuses recorded) Sep 4 14:36:20.209: INFO: Container live4 ready: false, restart count 0 Sep 4 14:36:20.209: INFO: live5 from default started at 2020-08-30 11:22:52 +0000 UTC (1 container statuses recorded) Sep 4 14:36:20.209: INFO: Container live5 ready: false, restart count 0 Sep 4 14:36:20.209: INFO: astaire-66c5667484-7s6hd from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (2 container statuses recorded) Sep 4 14:36:20.209: INFO: Container astaire ready: true, restart count 0 Sep 4 14:36:20.209: INFO: Container tailer ready: true, restart count 0 Sep 4 14:36:20.209: INFO: cassandra-bf5b4886d-w9qkb from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (1 container statuses recorded) Sep 4 14:36:20.209: INFO: Container cassandra ready: true, restart count 0 Sep 4 14:36:20.209: INFO: ellis-668f49999b-84cll from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (1 container statuses recorded) Sep 4 14:36:20.209: INFO: Container ellis ready: true, restart count 0 Sep 4 14:36:20.209: INFO: etcd-744b4d9f98-5bm8d from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (1 container statuses recorded) Sep 4 14:36:20.209: INFO: Container etcd ready: true, restart count 0 Sep 4 14:36:20.209: INFO: homestead-59959889bd-dh787 from ims-fqddr started at 2020-08-30 10:27:30 +0000 UTC (2 container statuses recorded) Sep 4 14:36:20.209: INFO: Container homestead ready: true, restart count 0 Sep 4 14:36:20.209: INFO: Container tailer ready: true, restart count 0 Sep 4 14:36:20.209: INFO: sprout-b4bbc5c49-m9nqx from ims-fqddr started at 2020-08-30 10:27:31 +0000 UTC (2 container statuses recorded) Sep 4 14:36:20.209: INFO: Container sprout ready: true, restart count 0 Sep 4 14:36:20.209: INFO: Container tailer ready: true, restart count 0 Sep 4 14:36:20.209: INFO: kindnet-grzzh from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Sep 4 14:36:20.209: INFO: Container kindnet-cni ready: true, restart count 1 Sep 4 14:36:20.209: INFO: kube-proxy-fjk8r from kube-system started at 2020-08-15 09:42:29 +0000 UTC (1 container statuses recorded) Sep 4 14:36:20.209: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Sep 4 14:36:20.288: INFO: Pod daemon-set-64t9w requesting resource cpu=0m on Node latest-worker Sep 4 14:36:20.288: INFO: Pod daemon-set-jxhg7 requesting resource cpu=0m on Node latest-worker2 Sep 4 14:36:20.288: INFO: Pod daemon-set-6qbhl requesting resource cpu=0m on Node latest-worker2 Sep 4 14:36:20.288: INFO: Pod daemon-set-ff4l6 requesting resource cpu=0m on Node latest-worker Sep 4 14:36:20.288: INFO: Pod test-recreate-deployment-f79dd4667-n4rtn requesting resource cpu=0m on Node latest-worker Sep 4 14:36:20.288: INFO: Pod astaire-66c5667484-7s6hd requesting resource cpu=0m on Node latest-worker2 Sep 4 14:36:20.288: INFO: Pod bono-7b5b98574f-j2wlq requesting resource cpu=0m on Node latest-worker Sep 4 14:36:20.288: INFO: Pod cassandra-bf5b4886d-w9qkb requesting resource cpu=0m on Node latest-worker2 Sep 4 14:36:20.288: INFO: Pod chronos-678bcff97d-665n9 requesting resource cpu=0m on Node latest-worker Sep 4 14:36:20.288: INFO: Pod 
ellis-668f49999b-84cll requesting resource cpu=0m on Node latest-worker2 Sep 4 14:36:20.288: INFO: Pod etcd-744b4d9f98-5bm8d requesting resource cpu=0m on Node latest-worker2 Sep 4 14:36:20.288: INFO: Pod homer-6d85c54796-5grhn requesting resource cpu=0m on Node latest-worker Sep 4 14:36:20.288: INFO: Pod homestead-59959889bd-dh787 requesting resource cpu=0m on Node latest-worker2 Sep 4 14:36:20.288: INFO: Pod homestead-prov-54ddb995c5-phmgj requesting resource cpu=0m on Node latest-worker Sep 4 14:36:20.288: INFO: Pod ralf-645db98795-l7gpf requesting resource cpu=0m on Node latest-worker Sep 4 14:36:20.288: INFO: Pod sprout-b4bbc5c49-m9nqx requesting resource cpu=0m on Node latest-worker2 Sep 4 14:36:20.288: INFO: Pod kindnet-gmpqb requesting resource cpu=100m on Node latest-worker Sep 4 14:36:20.288: INFO: Pod kindnet-grzzh requesting resource cpu=100m on Node latest-worker2 Sep 4 14:36:20.288: INFO: Pod kube-proxy-82wrf requesting resource cpu=0m on Node latest-worker Sep 4 14:36:20.288: INFO: Pod kube-proxy-fjk8r requesting resource cpu=0m on Node latest-worker2 STEP: Starting Pods to consume most of the cluster CPU. Sep 4 14:36:20.288: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Sep 4 14:36:20.334: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-53f1eb87-f734-4bf9-9a1f-58e0a3f56f76.16319b613cc81acb], Reason = [Started], Message = [Started container filler-pod-53f1eb87-f734-4bf9-9a1f-58e0a3f56f76] STEP: Considering event: Type = [Normal], Name = [filler-pod-0c00d3d5-4f90-49ab-9e2d-74c272ccf593.16319b613038300d], Reason = [Started], Message = [Started container filler-pod-0c00d3d5-4f90-49ab-9e2d-74c272ccf593] STEP: Considering event: Type = [Normal], Name = [filler-pod-53f1eb87-f734-4bf9-9a1f-58e0a3f56f76.16319b60ae8d6d3d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-0c00d3d5-4f90-49ab-9e2d-74c272ccf593.16319b609253646b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-53f1eb87-f734-4bf9-9a1f-58e0a3f56f76.16319b61268af130], Reason = [Created], Message = [Created container filler-pod-53f1eb87-f734-4bf9-9a1f-58e0a3f56f76] STEP: Considering event: Type = [Normal], Name = [filler-pod-0c00d3d5-4f90-49ab-9e2d-74c272ccf593.16319b611a0d486a], Reason = [Created], Message = [Created container filler-pod-0c00d3d5-4f90-49ab-9e2d-74c272ccf593] STEP: Considering event: Type = [Normal], Name = [filler-pod-0c00d3d5-4f90-49ab-9e2d-74c272ccf593.16319b603e6eb8a7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3988/filler-pod-0c00d3d5-4f90-49ab-9e2d-74c272ccf593 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-53f1eb87-f734-4bf9-9a1f-58e0a3f56f76.16319b603f781193], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3988/filler-pod-53f1eb87-f734-4bf9-9a1f-58e0a3f56f76 to latest-worker2] STEP: Considering event: Type = [Warning], Name = [additional-pod.16319b61a8195276], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] 
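The "Insufficient cpu" events above come from an extra pod whose CPU request exceeds what the two filler pods left unallocated. A minimal way to reproduce that event class on any cluster (hypothetical pod name, deliberately oversized request):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cpu-hog-demo            # hypothetical name, not from the test
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        cpu: "1000"             # far beyond any single node's allocatable CPU
EOF
kubectl get events --field-selector reason=FailedScheduling
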
STEP: Considering event: Type = [Warning], Name = [additional-pod.16319b61a9d9929a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:36:27.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3988" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.401 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":303,"completed":253,"skipped":4133,"failed":0} SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:36:27.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Sep 4 14:36:27.619: INFO: Waiting up to 1m0s for all nodes to be ready Sep 4 14:37:27.648: INFO: Waiting for terminating namespaces to be deleted... 
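The SchedulerPreemption suite that starts here relies on pods carrying different priorities. As background, a minimal PriorityClass sketch (illustrative name and value; the suite defines its own classes internally):

kubectl apply -f - <<'EOF'
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: demo-high-priority      # hypothetical; the suite creates its own classes
value: 1000000
globalDefault: false
description: "Sketch class for reproducing preemption by hand"
EOF
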
[BeforeEach] PreemptionExecutionPath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:37:27.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:487 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Sep 4 14:37:31.789: INFO: found a healthy node: latest-worker [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 14:37:46.424: INFO: pods created so far: [1 1 1] Sep 4 14:37:46.424: INFO: length of pods created so far: 3 Sep 4 14:38:06.494: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:38:13.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-7656" for this suite. [AfterEach] PreemptionExecutionPath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:461 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:38:13.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-8494" for this suite. 
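The "pods created so far: [1 1 1]" / "[2 2 1]" counts above appear to track pod creation across the ReplicaSets the test runs at different priorities, with higher-priority pods displacing lower-priority ones on the saturated node. Wiring a workload to the class from the previous sketch would look roughly like this (hypothetical names; the request size is illustrative, since the test sizes requests against node allocatable so higher-priority pods only fit by preempting):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: demo-preemptor          # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-preemptor
  template:
    metadata:
      labels:
        app: demo-preemptor
    spec:
      priorityClassName: demo-high-priority   # from the sketch above
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.2
        resources:
          requests:
            cpu: "500m"
EOF
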
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:106.147 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:450 runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":303,"completed":254,"skipped":4142,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:38:13.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Sep 4 14:38:13.728: INFO: Waiting up to 5m0s for pod "downward-api-ab9dd318-4b6d-4b41-a6d7-ffd90ae2d154" in namespace "downward-api-9529" to be "Succeeded or Failed" Sep 4 14:38:13.731: INFO: Pod "downward-api-ab9dd318-4b6d-4b41-a6d7-ffd90ae2d154": Phase="Pending", Reason="", readiness=false. Elapsed: 3.289235ms Sep 4 14:38:15.736: INFO: Pod "downward-api-ab9dd318-4b6d-4b41-a6d7-ffd90ae2d154": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007749768s Sep 4 14:38:17.740: INFO: Pod "downward-api-ab9dd318-4b6d-4b41-a6d7-ffd90ae2d154": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011914099s Sep 4 14:38:20.118: INFO: Pod "downward-api-ab9dd318-4b6d-4b41-a6d7-ffd90ae2d154": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.390343022s STEP: Saw pod success Sep 4 14:38:20.118: INFO: Pod "downward-api-ab9dd318-4b6d-4b41-a6d7-ffd90ae2d154" satisfied condition "Succeeded or Failed" Sep 4 14:38:20.122: INFO: Trying to get logs from node latest-worker2 pod downward-api-ab9dd318-4b6d-4b41-a6d7-ffd90ae2d154 container dapi-container: STEP: delete the pod Sep 4 14:38:20.161: INFO: Waiting for pod downward-api-ab9dd318-4b6d-4b41-a6d7-ffd90ae2d154 to disappear Sep 4 14:38:20.177: INFO: Pod downward-api-ab9dd318-4b6d-4b41-a6d7-ffd90ae2d154 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:38:20.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9529" for this suite. • [SLOW TEST:6.533 seconds] [sig-node] Downward API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":303,"completed":255,"skipped":4145,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:38:20.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 4 14:38:20.293: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4873516b-c4eb-4c8f-9ac4-93106c4f76a5" in namespace "downward-api-3101" to be "Succeeded or Failed" Sep 4 14:38:20.520: INFO: Pod "downwardapi-volume-4873516b-c4eb-4c8f-9ac4-93106c4f76a5": Phase="Pending", Reason="", readiness=false. Elapsed: 226.72394ms Sep 4 14:38:22.523: INFO: Pod "downwardapi-volume-4873516b-c4eb-4c8f-9ac4-93106c4f76a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230038332s Sep 4 14:38:24.527: INFO: Pod "downwardapi-volume-4873516b-c4eb-4c8f-9ac4-93106c4f76a5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.234116338s Sep 4 14:38:26.537: INFO: Pod "downwardapi-volume-4873516b-c4eb-4c8f-9ac4-93106c4f76a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.244593527s STEP: Saw pod success Sep 4 14:38:26.538: INFO: Pod "downwardapi-volume-4873516b-c4eb-4c8f-9ac4-93106c4f76a5" satisfied condition "Succeeded or Failed" Sep 4 14:38:26.542: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-4873516b-c4eb-4c8f-9ac4-93106c4f76a5 container client-container: STEP: delete the pod Sep 4 14:38:26.577: INFO: Waiting for pod downwardapi-volume-4873516b-c4eb-4c8f-9ac4-93106c4f76a5 to disappear Sep 4 14:38:26.608: INFO: Pod downwardapi-volume-4873516b-c4eb-4c8f-9ac4-93106c4f76a5 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:38:26.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3101" for this suite. • [SLOW TEST:6.434 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":256,"skipped":4146,"failed":0} [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:38:26.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-8574 STEP: creating service affinity-clusterip in namespace services-8574 STEP: creating replication controller affinity-clusterip in namespace services-8574 I0904 14:38:26.776319 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-8574, replica count: 3 I0904 14:38:29.826776 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0904 14:38:32.827070 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 
0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0904 14:38:35.827373 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 4 14:38:35.833: INFO: Creating new exec pod Sep 4 14:38:40.872: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-8574 execpod-affinity8v7t2 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Sep 4 14:38:41.114: INFO: stderr: "I0904 14:38:41.019629 3258 log.go:181] (0xc000291080) (0xc000b98500) Create stream\nI0904 14:38:41.019682 3258 log.go:181] (0xc000291080) (0xc000b98500) Stream added, broadcasting: 1\nI0904 14:38:41.024867 3258 log.go:181] (0xc000291080) Reply frame received for 1\nI0904 14:38:41.024909 3258 log.go:181] (0xc000291080) (0xc000c420a0) Create stream\nI0904 14:38:41.024917 3258 log.go:181] (0xc000291080) (0xc000c420a0) Stream added, broadcasting: 3\nI0904 14:38:41.025637 3258 log.go:181] (0xc000291080) Reply frame received for 3\nI0904 14:38:41.025666 3258 log.go:181] (0xc000291080) (0xc000b98000) Create stream\nI0904 14:38:41.025675 3258 log.go:181] (0xc000291080) (0xc000b98000) Stream added, broadcasting: 5\nI0904 14:38:41.026400 3258 log.go:181] (0xc000291080) Reply frame received for 5\nI0904 14:38:41.104840 3258 log.go:181] (0xc000291080) Data frame received for 5\nI0904 14:38:41.104875 3258 log.go:181] (0xc000b98000) (5) Data frame handling\nI0904 14:38:41.104885 3258 log.go:181] (0xc000b98000) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0904 14:38:41.105032 3258 log.go:181] (0xc000291080) Data frame received for 5\nI0904 14:38:41.105049 3258 log.go:181] (0xc000b98000) (5) Data frame handling\nI0904 14:38:41.105064 3258 log.go:181] (0xc000b98000) (5) Data frame sent\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0904 14:38:41.105376 3258 log.go:181] (0xc000291080) Data frame received for 3\nI0904 14:38:41.105425 3258 log.go:181] (0xc000c420a0) (3) Data frame handling\nI0904 14:38:41.105451 3258 log.go:181] (0xc000291080) Data frame received for 5\nI0904 14:38:41.105464 3258 log.go:181] (0xc000b98000) (5) Data frame handling\nI0904 14:38:41.107242 3258 log.go:181] (0xc000291080) Data frame received for 1\nI0904 14:38:41.107266 3258 log.go:181] (0xc000b98500) (1) Data frame handling\nI0904 14:38:41.107292 3258 log.go:181] (0xc000b98500) (1) Data frame sent\nI0904 14:38:41.107314 3258 log.go:181] (0xc000291080) (0xc000b98500) Stream removed, broadcasting: 1\nI0904 14:38:41.107339 3258 log.go:181] (0xc000291080) Go away received\nI0904 14:38:41.107664 3258 log.go:181] (0xc000291080) (0xc000b98500) Stream removed, broadcasting: 1\nI0904 14:38:41.107678 3258 log.go:181] (0xc000291080) (0xc000c420a0) Stream removed, broadcasting: 3\nI0904 14:38:41.107684 3258 log.go:181] (0xc000291080) (0xc000b98000) Stream removed, broadcasting: 5\n" Sep 4 14:38:41.115: INFO: stdout: "" Sep 4 14:38:41.115: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-8574 execpod-affinity8v7t2 -- /bin/sh -x -c nc -zv -t -w 2 10.110.186.214 80' Sep 4 14:38:41.349: INFO: stderr: "I0904 14:38:41.249193 3276 log.go:181] (0xc0007b5600) (0xc000742aa0) Create stream\nI0904 14:38:41.249249 3276 log.go:181] (0xc0007b5600) (0xc000742aa0) Stream added, broadcasting: 1\nI0904 14:38:41.253613 3276 log.go:181] (0xc0007b5600) Reply frame received for 1\nI0904 14:38:41.253656 
3276 log.go:181] (0xc0007b5600) (0xc0007ac320) Create stream\nI0904 14:38:41.253677 3276 log.go:181] (0xc0007b5600) (0xc0007ac320) Stream added, broadcasting: 3\nI0904 14:38:41.254605 3276 log.go:181] (0xc0007b5600) Reply frame received for 3\nI0904 14:38:41.254633 3276 log.go:181] (0xc0007b5600) (0xc0009d2460) Create stream\nI0904 14:38:41.254641 3276 log.go:181] (0xc0007b5600) (0xc0009d2460) Stream added, broadcasting: 5\nI0904 14:38:41.255589 3276 log.go:181] (0xc0007b5600) Reply frame received for 5\nI0904 14:38:41.339484 3276 log.go:181] (0xc0007b5600) Data frame received for 5\nI0904 14:38:41.339515 3276 log.go:181] (0xc0009d2460) (5) Data frame handling\nI0904 14:38:41.339523 3276 log.go:181] (0xc0009d2460) (5) Data frame sent\nI0904 14:38:41.339529 3276 log.go:181] (0xc0007b5600) Data frame received for 5\nI0904 14:38:41.339533 3276 log.go:181] (0xc0009d2460) (5) Data frame handling\nI0904 14:38:41.339542 3276 log.go:181] (0xc0007b5600) Data frame received for 3\n+ nc -zv -t -w 2 10.110.186.214 80\nConnection to 10.110.186.214 80 port [tcp/http] succeeded!\nI0904 14:38:41.339546 3276 log.go:181] (0xc0007ac320) (3) Data frame handling\nI0904 14:38:41.341054 3276 log.go:181] (0xc0007b5600) Data frame received for 1\nI0904 14:38:41.341069 3276 log.go:181] (0xc000742aa0) (1) Data frame handling\nI0904 14:38:41.341075 3276 log.go:181] (0xc000742aa0) (1) Data frame sent\nI0904 14:38:41.341086 3276 log.go:181] (0xc0007b5600) (0xc000742aa0) Stream removed, broadcasting: 1\nI0904 14:38:41.341138 3276 log.go:181] (0xc0007b5600) Go away received\nI0904 14:38:41.341339 3276 log.go:181] (0xc0007b5600) (0xc000742aa0) Stream removed, broadcasting: 1\nI0904 14:38:41.341350 3276 log.go:181] (0xc0007b5600) (0xc0007ac320) Stream removed, broadcasting: 3\nI0904 14:38:41.341356 3276 log.go:181] (0xc0007b5600) (0xc0009d2460) Stream removed, broadcasting: 5\n" Sep 4 14:38:41.349: INFO: stdout: "" Sep 4 14:38:41.349: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-8574 execpod-affinity8v7t2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.110.186.214:80/ ; done' Sep 4 14:38:41.687: INFO: stderr: "I0904 14:38:41.483406 3294 log.go:181] (0xc00003b290) (0xc000852820) Create stream\nI0904 14:38:41.483491 3294 log.go:181] (0xc00003b290) (0xc000852820) Stream added, broadcasting: 1\nI0904 14:38:41.485748 3294 log.go:181] (0xc00003b290) Reply frame received for 1\nI0904 14:38:41.485817 3294 log.go:181] (0xc00003b290) (0xc000564280) Create stream\nI0904 14:38:41.485845 3294 log.go:181] (0xc00003b290) (0xc000564280) Stream added, broadcasting: 3\nI0904 14:38:41.486857 3294 log.go:181] (0xc00003b290) Reply frame received for 3\nI0904 14:38:41.486883 3294 log.go:181] (0xc00003b290) (0xc000a7a1e0) Create stream\nI0904 14:38:41.486892 3294 log.go:181] (0xc00003b290) (0xc000a7a1e0) Stream added, broadcasting: 5\nI0904 14:38:41.488070 3294 log.go:181] (0xc00003b290) Reply frame received for 5\nI0904 14:38:41.551416 3294 log.go:181] (0xc00003b290) Data frame received for 5\nI0904 14:38:41.551436 3294 log.go:181] (0xc000a7a1e0) (5) Data frame handling\nI0904 14:38:41.551448 3294 log.go:181] (0xc000a7a1e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.186.214:80/\nI0904 14:38:41.551558 3294 log.go:181] (0xc00003b290) Data frame received for 3\nI0904 14:38:41.551579 3294 log.go:181] (0xc000564280) (3) Data frame handling\nI0904 14:38:41.551600 3294 log.go:181] 
(0xc000564280) (3) Data frame sent
[... near-identical "+ echo" / "+ curl -q -s --connect-timeout 2 http://10.110.186.214:80/" iterations (16 in total, per "seq 0 15") and their repetitive SPDY data-frame bookkeeping elided ...]
I0904 14:38:41.669345 3294
log.go:181] (0xc00003b290) Data frame received for 3\nI0904 14:38:41.669350 3294 log.go:181] (0xc000564280) (3) Data frame handling\nI0904 14:38:41.669355 3294 log.go:181] (0xc000564280) (3) Data frame sent\nI0904 14:38:41.672616 3294 log.go:181] (0xc00003b290) Data frame received for 3\nI0904 14:38:41.672632 3294 log.go:181] (0xc000564280) (3) Data frame handling\nI0904 14:38:41.672648 3294 log.go:181] (0xc000564280) (3) Data frame sent\nI0904 14:38:41.673221 3294 log.go:181] (0xc00003b290) Data frame received for 3\nI0904 14:38:41.673250 3294 log.go:181] (0xc000564280) (3) Data frame handling\nI0904 14:38:41.673308 3294 log.go:181] (0xc000564280) (3) Data frame sent\nI0904 14:38:41.673334 3294 log.go:181] (0xc00003b290) Data frame received for 5\nI0904 14:38:41.673351 3294 log.go:181] (0xc000a7a1e0) (5) Data frame handling\nI0904 14:38:41.673372 3294 log.go:181] (0xc000a7a1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.186.214:80/\nI0904 14:38:41.677070 3294 log.go:181] (0xc00003b290) Data frame received for 3\nI0904 14:38:41.677092 3294 log.go:181] (0xc000564280) (3) Data frame handling\nI0904 14:38:41.677112 3294 log.go:181] (0xc000564280) (3) Data frame sent\nI0904 14:38:41.677698 3294 log.go:181] (0xc00003b290) Data frame received for 5\nI0904 14:38:41.677718 3294 log.go:181] (0xc000a7a1e0) (5) Data frame handling\nI0904 14:38:41.677801 3294 log.go:181] (0xc00003b290) Data frame received for 3\nI0904 14:38:41.677817 3294 log.go:181] (0xc000564280) (3) Data frame handling\nI0904 14:38:41.679215 3294 log.go:181] (0xc00003b290) Data frame received for 1\nI0904 14:38:41.679237 3294 log.go:181] (0xc000852820) (1) Data frame handling\nI0904 14:38:41.679262 3294 log.go:181] (0xc000852820) (1) Data frame sent\nI0904 14:38:41.679282 3294 log.go:181] (0xc00003b290) (0xc000852820) Stream removed, broadcasting: 1\nI0904 14:38:41.679306 3294 log.go:181] (0xc00003b290) Go away received\nI0904 14:38:41.679637 3294 log.go:181] (0xc00003b290) (0xc000852820) Stream removed, broadcasting: 1\nI0904 14:38:41.679657 3294 log.go:181] (0xc00003b290) (0xc000564280) Stream removed, broadcasting: 3\nI0904 14:38:41.679673 3294 log.go:181] (0xc00003b290) (0xc000a7a1e0) Stream removed, broadcasting: 5\n" Sep 4 14:38:41.687: INFO: stdout: "\naffinity-clusterip-6xj7l\naffinity-clusterip-6xj7l\naffinity-clusterip-6xj7l\naffinity-clusterip-6xj7l\naffinity-clusterip-6xj7l\naffinity-clusterip-6xj7l\naffinity-clusterip-6xj7l\naffinity-clusterip-6xj7l\naffinity-clusterip-6xj7l\naffinity-clusterip-6xj7l\naffinity-clusterip-6xj7l\naffinity-clusterip-6xj7l\naffinity-clusterip-6xj7l\naffinity-clusterip-6xj7l\naffinity-clusterip-6xj7l\naffinity-clusterip-6xj7l" Sep 4 14:38:41.687: INFO: Received response from host: affinity-clusterip-6xj7l Sep 4 14:38:41.687: INFO: Received response from host: affinity-clusterip-6xj7l Sep 4 14:38:41.687: INFO: Received response from host: affinity-clusterip-6xj7l Sep 4 14:38:41.687: INFO: Received response from host: affinity-clusterip-6xj7l Sep 4 14:38:41.687: INFO: Received response from host: affinity-clusterip-6xj7l Sep 4 14:38:41.687: INFO: Received response from host: affinity-clusterip-6xj7l Sep 4 14:38:41.687: INFO: Received response from host: affinity-clusterip-6xj7l Sep 4 14:38:41.687: INFO: Received response from host: affinity-clusterip-6xj7l Sep 4 14:38:41.687: INFO: Received response from host: affinity-clusterip-6xj7l Sep 4 14:38:41.687: INFO: Received response from host: affinity-clusterip-6xj7l Sep 4 14:38:41.687: INFO: Received response from 
host: affinity-clusterip-6xj7l Sep 4 14:38:41.687: INFO: Received response from host: affinity-clusterip-6xj7l Sep 4 14:38:41.687: INFO: Received response from host: affinity-clusterip-6xj7l Sep 4 14:38:41.687: INFO: Received response from host: affinity-clusterip-6xj7l Sep 4 14:38:41.687: INFO: Received response from host: affinity-clusterip-6xj7l Sep 4 14:38:41.687: INFO: Received response from host: affinity-clusterip-6xj7l Sep 4 14:38:41.687: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-8574, will wait for the garbage collector to delete the pods Sep 4 14:38:41.791: INFO: Deleting ReplicationController affinity-clusterip took: 9.428836ms Sep 4 14:38:42.291: INFO: Terminating ReplicationController affinity-clusterip pods took: 500.213716ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:38:59.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8574" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:33.273 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":257,"skipped":4146,"failed":0} SSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:38:59.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Sep 4 14:39:14.096: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7020 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 4 14:39:14.096: INFO: >>> kubeConfig: /root/.kube/config I0904 14:39:14.134524 7 log.go:181] (0xc002df42c0) (0xc002128c80) Create stream I0904 
14:39:14.134559 7 log.go:181] (0xc002df42c0) (0xc002128c80) Stream added, broadcasting: 1 I0904 14:39:14.136547 7 log.go:181] (0xc002df42c0) Reply frame received for 1 I0904 14:39:14.136585 7 log.go:181] (0xc002df42c0) (0xc0022821e0) Create stream I0904 14:39:14.136604 7 log.go:181] (0xc002df42c0) (0xc0022821e0) Stream added, broadcasting: 3 I0904 14:39:14.137558 7 log.go:181] (0xc002df42c0) Reply frame received for 3 I0904 14:39:14.137590 7 log.go:181] (0xc002df42c0) (0xc0033c2320) Create stream I0904 14:39:14.137601 7 log.go:181] (0xc002df42c0) (0xc0033c2320) Stream added, broadcasting: 5 I0904 14:39:14.138426 7 log.go:181] (0xc002df42c0) Reply frame received for 5 I0904 14:39:14.195393 7 log.go:181] (0xc002df42c0) Data frame received for 3 I0904 14:39:14.195418 7 log.go:181] (0xc0022821e0) (3) Data frame handling I0904 14:39:14.195425 7 log.go:181] (0xc0022821e0) (3) Data frame sent I0904 14:39:14.195430 7 log.go:181] (0xc002df42c0) Data frame received for 3 I0904 14:39:14.195434 7 log.go:181] (0xc0022821e0) (3) Data frame handling I0904 14:39:14.195450 7 log.go:181] (0xc002df42c0) Data frame received for 5 I0904 14:39:14.195457 7 log.go:181] (0xc0033c2320) (5) Data frame handling I0904 14:39:14.197234 7 log.go:181] (0xc002df42c0) Data frame received for 1 I0904 14:39:14.197267 7 log.go:181] (0xc002128c80) (1) Data frame handling I0904 14:39:14.197291 7 log.go:181] (0xc002128c80) (1) Data frame sent I0904 14:39:14.197322 7 log.go:181] (0xc002df42c0) (0xc002128c80) Stream removed, broadcasting: 1 I0904 14:39:14.197351 7 log.go:181] (0xc002df42c0) Go away received I0904 14:39:14.197492 7 log.go:181] (0xc002df42c0) (0xc002128c80) Stream removed, broadcasting: 1 I0904 14:39:14.197528 7 log.go:181] (0xc002df42c0) (0xc0022821e0) Stream removed, broadcasting: 3 I0904 14:39:14.197559 7 log.go:181] (0xc002df42c0) (0xc0033c2320) Stream removed, broadcasting: 5 Sep 4 14:39:14.197: INFO: Exec stderr: "" Sep 4 14:39:14.197: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7020 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 4 14:39:14.197: INFO: >>> kubeConfig: /root/.kube/config I0904 14:39:14.232247 7 log.go:181] (0xc002eae210) (0xc002282460) Create stream I0904 14:39:14.232276 7 log.go:181] (0xc002eae210) (0xc002282460) Stream added, broadcasting: 1 I0904 14:39:14.234862 7 log.go:181] (0xc002eae210) Reply frame received for 1 I0904 14:39:14.234954 7 log.go:181] (0xc002eae210) (0xc0033c23c0) Create stream I0904 14:39:14.234981 7 log.go:181] (0xc002eae210) (0xc0033c23c0) Stream added, broadcasting: 3 I0904 14:39:14.236052 7 log.go:181] (0xc002eae210) Reply frame received for 3 I0904 14:39:14.236103 7 log.go:181] (0xc002eae210) (0xc0037aa000) Create stream I0904 14:39:14.236118 7 log.go:181] (0xc002eae210) (0xc0037aa000) Stream added, broadcasting: 5 I0904 14:39:14.238921 7 log.go:181] (0xc002eae210) Reply frame received for 5 I0904 14:39:14.313127 7 log.go:181] (0xc002eae210) Data frame received for 5 I0904 14:39:14.313157 7 log.go:181] (0xc0037aa000) (5) Data frame handling I0904 14:39:14.313174 7 log.go:181] (0xc002eae210) Data frame received for 3 I0904 14:39:14.313180 7 log.go:181] (0xc0033c23c0) (3) Data frame handling I0904 14:39:14.313195 7 log.go:181] (0xc0033c23c0) (3) Data frame sent I0904 14:39:14.313204 7 log.go:181] (0xc002eae210) Data frame received for 3 I0904 14:39:14.313212 7 log.go:181] (0xc0033c23c0) (3) Data frame handling I0904 14:39:14.314054 7 log.go:181] (0xc002eae210) 
Data frame received for 1 I0904 14:39:14.314099 7 log.go:181] (0xc002282460) (1) Data frame handling I0904 14:39:14.314123 7 log.go:181] (0xc002282460) (1) Data frame sent I0904 14:39:14.314194 7 log.go:181] (0xc002eae210) (0xc002282460) Stream removed, broadcasting: 1 I0904 14:39:14.314240 7 log.go:181] (0xc002eae210) Go away received I0904 14:39:14.314295 7 log.go:181] (0xc002eae210) (0xc002282460) Stream removed, broadcasting: 1 I0904 14:39:14.314313 7 log.go:181] (0xc002eae210) (0xc0033c23c0) Stream removed, broadcasting: 3 I0904 14:39:14.314330 7 log.go:181] (0xc002eae210) (0xc0037aa000) Stream removed, broadcasting: 5 Sep 4 14:39:14.314: INFO: Exec stderr: "" Sep 4 14:39:14.314: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7020 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 4 14:39:14.314: INFO: >>> kubeConfig: /root/.kube/config I0904 14:39:14.355753 7 log.go:181] (0xc006540420) (0xc001cd0280) Create stream I0904 14:39:14.355789 7 log.go:181] (0xc006540420) (0xc001cd0280) Stream added, broadcasting: 1 I0904 14:39:14.358181 7 log.go:181] (0xc006540420) Reply frame received for 1 I0904 14:39:14.358209 7 log.go:181] (0xc006540420) (0xc0037aa3c0) Create stream I0904 14:39:14.358218 7 log.go:181] (0xc006540420) (0xc0037aa3c0) Stream added, broadcasting: 3 I0904 14:39:14.359247 7 log.go:181] (0xc006540420) Reply frame received for 3 I0904 14:39:14.359280 7 log.go:181] (0xc006540420) (0xc002282500) Create stream I0904 14:39:14.359294 7 log.go:181] (0xc006540420) (0xc002282500) Stream added, broadcasting: 5 I0904 14:39:14.360299 7 log.go:181] (0xc006540420) Reply frame received for 5 I0904 14:39:14.437327 7 log.go:181] (0xc006540420) Data frame received for 5 I0904 14:39:14.437356 7 log.go:181] (0xc002282500) (5) Data frame handling I0904 14:39:14.437378 7 log.go:181] (0xc006540420) Data frame received for 3 I0904 14:39:14.437391 7 log.go:181] (0xc0037aa3c0) (3) Data frame handling I0904 14:39:14.437406 7 log.go:181] (0xc0037aa3c0) (3) Data frame sent I0904 14:39:14.437430 7 log.go:181] (0xc006540420) Data frame received for 3 I0904 14:39:14.437435 7 log.go:181] (0xc0037aa3c0) (3) Data frame handling I0904 14:39:14.438726 7 log.go:181] (0xc006540420) Data frame received for 1 I0904 14:39:14.438757 7 log.go:181] (0xc001cd0280) (1) Data frame handling I0904 14:39:14.438776 7 log.go:181] (0xc001cd0280) (1) Data frame sent I0904 14:39:14.438800 7 log.go:181] (0xc006540420) (0xc001cd0280) Stream removed, broadcasting: 1 I0904 14:39:14.438821 7 log.go:181] (0xc006540420) Go away received I0904 14:39:14.438936 7 log.go:181] (0xc006540420) (0xc001cd0280) Stream removed, broadcasting: 1 I0904 14:39:14.438960 7 log.go:181] (0xc006540420) (0xc0037aa3c0) Stream removed, broadcasting: 3 I0904 14:39:14.438972 7 log.go:181] (0xc006540420) (0xc002282500) Stream removed, broadcasting: 5 Sep 4 14:39:14.438: INFO: Exec stderr: "" Sep 4 14:39:14.439: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7020 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 4 14:39:14.439: INFO: >>> kubeConfig: /root/.kube/config I0904 14:39:14.467217 7 log.go:181] (0xc006540bb0) (0xc001cd05a0) Create stream I0904 14:39:14.467247 7 log.go:181] (0xc006540bb0) (0xc001cd05a0) Stream added, broadcasting: 1 I0904 14:39:14.469011 7 log.go:181] (0xc006540bb0) Reply frame received for 1 I0904 14:39:14.469047 7 log.go:181] 
(0xc006540bb0) (0xc0022825a0) Create stream I0904 14:39:14.469059 7 log.go:181] (0xc006540bb0) (0xc0022825a0) Stream added, broadcasting: 3 I0904 14:39:14.469630 7 log.go:181] (0xc006540bb0) Reply frame received for 3 I0904 14:39:14.469653 7 log.go:181] (0xc006540bb0) (0xc002322140) Create stream I0904 14:39:14.469662 7 log.go:181] (0xc006540bb0) (0xc002322140) Stream added, broadcasting: 5 I0904 14:39:14.470276 7 log.go:181] (0xc006540bb0) Reply frame received for 5 I0904 14:39:14.537056 7 log.go:181] (0xc006540bb0) Data frame received for 3 I0904 14:39:14.537141 7 log.go:181] (0xc0022825a0) (3) Data frame handling I0904 14:39:14.537175 7 log.go:181] (0xc0022825a0) (3) Data frame sent I0904 14:39:14.537190 7 log.go:181] (0xc006540bb0) Data frame received for 3 I0904 14:39:14.537204 7 log.go:181] (0xc0022825a0) (3) Data frame handling I0904 14:39:14.537223 7 log.go:181] (0xc006540bb0) Data frame received for 5 I0904 14:39:14.537240 7 log.go:181] (0xc002322140) (5) Data frame handling I0904 14:39:14.538456 7 log.go:181] (0xc006540bb0) Data frame received for 1 I0904 14:39:14.538478 7 log.go:181] (0xc001cd05a0) (1) Data frame handling I0904 14:39:14.538497 7 log.go:181] (0xc001cd05a0) (1) Data frame sent I0904 14:39:14.538626 7 log.go:181] (0xc006540bb0) (0xc001cd05a0) Stream removed, broadcasting: 1 I0904 14:39:14.538718 7 log.go:181] (0xc006540bb0) (0xc001cd05a0) Stream removed, broadcasting: 1 I0904 14:39:14.538751 7 log.go:181] (0xc006540bb0) (0xc0022825a0) Stream removed, broadcasting: 3 I0904 14:39:14.538778 7 log.go:181] (0xc006540bb0) (0xc002322140) Stream removed, broadcasting: 5 Sep 4 14:39:14.538: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Sep 4 14:39:14.538: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7020 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} I0904 14:39:14.538846 7 log.go:181] (0xc006540bb0) Go away received Sep 4 14:39:14.538: INFO: >>> kubeConfig: /root/.kube/config I0904 14:39:14.567753 7 log.go:181] (0xc002df4dc0) (0xc0037aa6e0) Create stream I0904 14:39:14.567794 7 log.go:181] (0xc002df4dc0) (0xc0037aa6e0) Stream added, broadcasting: 1 I0904 14:39:14.571210 7 log.go:181] (0xc002df4dc0) Reply frame received for 1 I0904 14:39:14.571270 7 log.go:181] (0xc002df4dc0) (0xc0037aa820) Create stream I0904 14:39:14.571294 7 log.go:181] (0xc002df4dc0) (0xc0037aa820) Stream added, broadcasting: 3 I0904 14:39:14.574111 7 log.go:181] (0xc002df4dc0) Reply frame received for 3 I0904 14:39:14.574198 7 log.go:181] (0xc002df4dc0) (0xc002282640) Create stream I0904 14:39:14.574226 7 log.go:181] (0xc002df4dc0) (0xc002282640) Stream added, broadcasting: 5 I0904 14:39:14.576468 7 log.go:181] (0xc002df4dc0) Reply frame received for 5 I0904 14:39:14.644360 7 log.go:181] (0xc002df4dc0) Data frame received for 5 I0904 14:39:14.644399 7 log.go:181] (0xc002282640) (5) Data frame handling I0904 14:39:14.644429 7 log.go:181] (0xc002df4dc0) Data frame received for 3 I0904 14:39:14.644440 7 log.go:181] (0xc0037aa820) (3) Data frame handling I0904 14:39:14.644452 7 log.go:181] (0xc0037aa820) (3) Data frame sent I0904 14:39:14.644460 7 log.go:181] (0xc002df4dc0) Data frame received for 3 I0904 14:39:14.644467 7 log.go:181] (0xc0037aa820) (3) Data frame handling I0904 14:39:14.645573 7 log.go:181] (0xc002df4dc0) Data frame received for 1 I0904 14:39:14.645592 7 log.go:181] (0xc0037aa6e0) (1) Data frame handling I0904 
14:39:14.645602 7 log.go:181] (0xc0037aa6e0) (1) Data frame sent I0904 14:39:14.645609 7 log.go:181] (0xc002df4dc0) (0xc0037aa6e0) Stream removed, broadcasting: 1 I0904 14:39:14.645623 7 log.go:181] (0xc002df4dc0) Go away received I0904 14:39:14.645720 7 log.go:181] (0xc002df4dc0) (0xc0037aa6e0) Stream removed, broadcasting: 1 I0904 14:39:14.645736 7 log.go:181] (0xc002df4dc0) (0xc0037aa820) Stream removed, broadcasting: 3 I0904 14:39:14.645743 7 log.go:181] (0xc002df4dc0) (0xc002282640) Stream removed, broadcasting: 5 Sep 4 14:39:14.645: INFO: Exec stderr: "" Sep 4 14:39:14.645: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7020 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 4 14:39:14.645: INFO: >>> kubeConfig: /root/.kube/config I0904 14:39:14.669493 7 log.go:181] (0xc002eae8f0) (0xc0022828c0) Create stream I0904 14:39:14.669513 7 log.go:181] (0xc002eae8f0) (0xc0022828c0) Stream added, broadcasting: 1 I0904 14:39:14.671229 7 log.go:181] (0xc002eae8f0) Reply frame received for 1 I0904 14:39:14.671255 7 log.go:181] (0xc002eae8f0) (0xc0023221e0) Create stream I0904 14:39:14.671265 7 log.go:181] (0xc002eae8f0) (0xc0023221e0) Stream added, broadcasting: 3 I0904 14:39:14.672064 7 log.go:181] (0xc002eae8f0) Reply frame received for 3 I0904 14:39:14.672113 7 log.go:181] (0xc002eae8f0) (0xc0033c2500) Create stream I0904 14:39:14.672128 7 log.go:181] (0xc002eae8f0) (0xc0033c2500) Stream added, broadcasting: 5 I0904 14:39:14.673274 7 log.go:181] (0xc002eae8f0) Reply frame received for 5 I0904 14:39:14.749151 7 log.go:181] (0xc002eae8f0) Data frame received for 5 I0904 14:39:14.749189 7 log.go:181] (0xc0033c2500) (5) Data frame handling I0904 14:39:14.749216 7 log.go:181] (0xc002eae8f0) Data frame received for 3 I0904 14:39:14.749227 7 log.go:181] (0xc0023221e0) (3) Data frame handling I0904 14:39:14.749239 7 log.go:181] (0xc0023221e0) (3) Data frame sent I0904 14:39:14.749255 7 log.go:181] (0xc002eae8f0) Data frame received for 3 I0904 14:39:14.749267 7 log.go:181] (0xc0023221e0) (3) Data frame handling I0904 14:39:14.750848 7 log.go:181] (0xc002eae8f0) Data frame received for 1 I0904 14:39:14.750872 7 log.go:181] (0xc0022828c0) (1) Data frame handling I0904 14:39:14.750895 7 log.go:181] (0xc0022828c0) (1) Data frame sent I0904 14:39:14.750914 7 log.go:181] (0xc002eae8f0) (0xc0022828c0) Stream removed, broadcasting: 1 I0904 14:39:14.750984 7 log.go:181] (0xc002eae8f0) (0xc0022828c0) Stream removed, broadcasting: 1 I0904 14:39:14.751002 7 log.go:181] (0xc002eae8f0) (0xc0023221e0) Stream removed, broadcasting: 3 I0904 14:39:14.751016 7 log.go:181] (0xc002eae8f0) (0xc0033c2500) Stream removed, broadcasting: 5 I0904 14:39:14.751036 7 log.go:181] (0xc002eae8f0) Go away received Sep 4 14:39:14.751: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Sep 4 14:39:14.751: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7020 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 4 14:39:14.751: INFO: >>> kubeConfig: /root/.kube/config I0904 14:39:14.778375 7 log.go:181] (0xc006540f20) (0xc001cd0960) Create stream I0904 14:39:14.778406 7 log.go:181] (0xc006540f20) (0xc001cd0960) Stream added, broadcasting: 1 I0904 14:39:14.780395 7 log.go:181] (0xc006540f20) Reply frame received for 1 I0904 14:39:14.780421 7 log.go:181] 
(0xc006540f20) (0xc001cd0a00) Create stream I0904 14:39:14.780433 7 log.go:181] (0xc006540f20) (0xc001cd0a00) Stream added, broadcasting: 3 I0904 14:39:14.781487 7 log.go:181] (0xc006540f20) Reply frame received for 3 I0904 14:39:14.781517 7 log.go:181] (0xc006540f20) (0xc0037aa8c0) Create stream I0904 14:39:14.781529 7 log.go:181] (0xc006540f20) (0xc0037aa8c0) Stream added, broadcasting: 5 I0904 14:39:14.782334 7 log.go:181] (0xc006540f20) Reply frame received for 5 I0904 14:39:14.845697 7 log.go:181] (0xc006540f20) Data frame received for 3 I0904 14:39:14.845724 7 log.go:181] (0xc001cd0a00) (3) Data frame handling I0904 14:39:14.845731 7 log.go:181] (0xc001cd0a00) (3) Data frame sent I0904 14:39:14.845736 7 log.go:181] (0xc006540f20) Data frame received for 3 I0904 14:39:14.845755 7 log.go:181] (0xc006540f20) Data frame received for 5 I0904 14:39:14.845791 7 log.go:181] (0xc0037aa8c0) (5) Data frame handling I0904 14:39:14.845817 7 log.go:181] (0xc001cd0a00) (3) Data frame handling I0904 14:39:14.847025 7 log.go:181] (0xc006540f20) Data frame received for 1 I0904 14:39:14.847060 7 log.go:181] (0xc001cd0960) (1) Data frame handling I0904 14:39:14.847091 7 log.go:181] (0xc001cd0960) (1) Data frame sent I0904 14:39:14.847122 7 log.go:181] (0xc006540f20) (0xc001cd0960) Stream removed, broadcasting: 1 I0904 14:39:14.847159 7 log.go:181] (0xc006540f20) Go away received I0904 14:39:14.847222 7 log.go:181] (0xc006540f20) (0xc001cd0960) Stream removed, broadcasting: 1 I0904 14:39:14.847233 7 log.go:181] (0xc006540f20) (0xc001cd0a00) Stream removed, broadcasting: 3 I0904 14:39:14.847241 7 log.go:181] (0xc006540f20) (0xc0037aa8c0) Stream removed, broadcasting: 5 Sep 4 14:39:14.847: INFO: Exec stderr: "" Sep 4 14:39:14.847: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7020 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 4 14:39:14.847: INFO: >>> kubeConfig: /root/.kube/config I0904 14:39:14.877491 7 log.go:181] (0xc002df53f0) (0xc0037aab40) Create stream I0904 14:39:14.877519 7 log.go:181] (0xc002df53f0) (0xc0037aab40) Stream added, broadcasting: 1 I0904 14:39:14.879655 7 log.go:181] (0xc002df53f0) Reply frame received for 1 I0904 14:39:14.879700 7 log.go:181] (0xc002df53f0) (0xc002282960) Create stream I0904 14:39:14.879719 7 log.go:181] (0xc002df53f0) (0xc002282960) Stream added, broadcasting: 3 I0904 14:39:14.880928 7 log.go:181] (0xc002df53f0) Reply frame received for 3 I0904 14:39:14.880958 7 log.go:181] (0xc002df53f0) (0xc002282a00) Create stream I0904 14:39:14.880983 7 log.go:181] (0xc002df53f0) (0xc002282a00) Stream added, broadcasting: 5 I0904 14:39:14.882156 7 log.go:181] (0xc002df53f0) Reply frame received for 5 I0904 14:39:14.941421 7 log.go:181] (0xc002df53f0) Data frame received for 3 I0904 14:39:14.941452 7 log.go:181] (0xc002282960) (3) Data frame handling I0904 14:39:14.941460 7 log.go:181] (0xc002282960) (3) Data frame sent I0904 14:39:14.941465 7 log.go:181] (0xc002df53f0) Data frame received for 3 I0904 14:39:14.941471 7 log.go:181] (0xc002282960) (3) Data frame handling I0904 14:39:14.941555 7 log.go:181] (0xc002df53f0) Data frame received for 5 I0904 14:39:14.941594 7 log.go:181] (0xc002282a00) (5) Data frame handling I0904 14:39:14.942372 7 log.go:181] (0xc002df53f0) Data frame received for 1 I0904 14:39:14.942388 7 log.go:181] (0xc0037aab40) (1) Data frame handling I0904 14:39:14.942406 7 log.go:181] (0xc0037aab40) (1) Data frame sent I0904 
14:39:14.942419 7 log.go:181] (0xc002df53f0) (0xc0037aab40) Stream removed, broadcasting: 1 I0904 14:39:14.942429 7 log.go:181] (0xc002df53f0) Go away received I0904 14:39:14.942545 7 log.go:181] (0xc002df53f0) (0xc0037aab40) Stream removed, broadcasting: 1 I0904 14:39:14.942575 7 log.go:181] (0xc002df53f0) (0xc002282960) Stream removed, broadcasting: 3 I0904 14:39:14.942596 7 log.go:181] (0xc002df53f0) (0xc002282a00) Stream removed, broadcasting: 5 Sep 4 14:39:14.942: INFO: Exec stderr: "" Sep 4 14:39:14.942: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7020 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 4 14:39:14.942: INFO: >>> kubeConfig: /root/.kube/config I0904 14:39:14.971317 7 log.go:181] (0xc0049da4d0) (0xc0033c2960) Create stream I0904 14:39:14.971356 7 log.go:181] (0xc0049da4d0) (0xc0033c2960) Stream added, broadcasting: 1 I0904 14:39:14.973245 7 log.go:181] (0xc0049da4d0) Reply frame received for 1 I0904 14:39:14.973278 7 log.go:181] (0xc0049da4d0) (0xc0023223c0) Create stream I0904 14:39:14.973295 7 log.go:181] (0xc0049da4d0) (0xc0023223c0) Stream added, broadcasting: 3 I0904 14:39:14.974272 7 log.go:181] (0xc0049da4d0) Reply frame received for 3 I0904 14:39:14.974301 7 log.go:181] (0xc0049da4d0) (0xc0033c2a00) Create stream I0904 14:39:14.974313 7 log.go:181] (0xc0049da4d0) (0xc0033c2a00) Stream added, broadcasting: 5 I0904 14:39:14.975145 7 log.go:181] (0xc0049da4d0) Reply frame received for 5 I0904 14:39:15.033697 7 log.go:181] (0xc0049da4d0) Data frame received for 3 I0904 14:39:15.033726 7 log.go:181] (0xc0023223c0) (3) Data frame handling I0904 14:39:15.033742 7 log.go:181] (0xc0023223c0) (3) Data frame sent I0904 14:39:15.033754 7 log.go:181] (0xc0049da4d0) Data frame received for 3 I0904 14:39:15.033763 7 log.go:181] (0xc0023223c0) (3) Data frame handling I0904 14:39:15.033784 7 log.go:181] (0xc0049da4d0) Data frame received for 5 I0904 14:39:15.033798 7 log.go:181] (0xc0033c2a00) (5) Data frame handling I0904 14:39:15.035115 7 log.go:181] (0xc0049da4d0) Data frame received for 1 I0904 14:39:15.035140 7 log.go:181] (0xc0033c2960) (1) Data frame handling I0904 14:39:15.035156 7 log.go:181] (0xc0033c2960) (1) Data frame sent I0904 14:39:15.035171 7 log.go:181] (0xc0049da4d0) (0xc0033c2960) Stream removed, broadcasting: 1 I0904 14:39:15.035196 7 log.go:181] (0xc0049da4d0) Go away received I0904 14:39:15.035312 7 log.go:181] (0xc0049da4d0) (0xc0033c2960) Stream removed, broadcasting: 1 I0904 14:39:15.035339 7 log.go:181] (0xc0049da4d0) (0xc0023223c0) Stream removed, broadcasting: 3 I0904 14:39:15.035352 7 log.go:181] (0xc0049da4d0) (0xc0033c2a00) Stream removed, broadcasting: 5 Sep 4 14:39:15.035: INFO: Exec stderr: "" Sep 4 14:39:15.035: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7020 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 4 14:39:15.035: INFO: >>> kubeConfig: /root/.kube/config I0904 14:39:15.063903 7 log.go:181] (0xc0049dab00) (0xc0033c2c80) Create stream I0904 14:39:15.063943 7 log.go:181] (0xc0049dab00) (0xc0033c2c80) Stream added, broadcasting: 1 I0904 14:39:15.070843 7 log.go:181] (0xc0049dab00) Reply frame received for 1 I0904 14:39:15.070906 7 log.go:181] (0xc0049dab00) (0xc0033c2d20) Create stream I0904 14:39:15.070920 7 log.go:181] (0xc0049dab00) (0xc0033c2d20) Stream added, broadcasting: 3 I0904 14:39:15.071726 7 
log.go:181] (0xc0049dab00) Reply frame received for 3 I0904 14:39:15.071763 7 log.go:181] (0xc0049dab00) (0xc001cd0aa0) Create stream I0904 14:39:15.071772 7 log.go:181] (0xc0049dab00) (0xc001cd0aa0) Stream added, broadcasting: 5 I0904 14:39:15.072408 7 log.go:181] (0xc0049dab00) Reply frame received for 5 I0904 14:39:15.135008 7 log.go:181] (0xc0049dab00) Data frame received for 3 I0904 14:39:15.135054 7 log.go:181] (0xc0033c2d20) (3) Data frame handling I0904 14:39:15.135072 7 log.go:181] (0xc0033c2d20) (3) Data frame sent I0904 14:39:15.135082 7 log.go:181] (0xc0049dab00) Data frame received for 3 I0904 14:39:15.135089 7 log.go:181] (0xc0033c2d20) (3) Data frame handling I0904 14:39:15.135116 7 log.go:181] (0xc0049dab00) Data frame received for 5 I0904 14:39:15.135126 7 log.go:181] (0xc001cd0aa0) (5) Data frame handling I0904 14:39:15.136611 7 log.go:181] (0xc0049dab00) Data frame received for 1 I0904 14:39:15.136635 7 log.go:181] (0xc0033c2c80) (1) Data frame handling I0904 14:39:15.136657 7 log.go:181] (0xc0033c2c80) (1) Data frame sent I0904 14:39:15.136751 7 log.go:181] (0xc0049dab00) (0xc0033c2c80) Stream removed, broadcasting: 1 I0904 14:39:15.136857 7 log.go:181] (0xc0049dab00) (0xc0033c2c80) Stream removed, broadcasting: 1 I0904 14:39:15.136867 7 log.go:181] (0xc0049dab00) (0xc0033c2d20) Stream removed, broadcasting: 3 I0904 14:39:15.136925 7 log.go:181] (0xc0049dab00) Go away received I0904 14:39:15.136972 7 log.go:181] (0xc0049dab00) (0xc001cd0aa0) Stream removed, broadcasting: 5 Sep 4 14:39:15.136: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:39:15.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-7020" for this suite. 
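Taken together, the ten execs above verify one rule: the kubelet manages /etc/hosts only for containers of hostNetwork=false pods that do not mount /etc/hosts themselves; busybox-3 (which mounts the file) and every container of the hostNetwork=true pod see an unmanaged copy. A minimal way to spot-check this by hand, assuming a cluster reachable through the current kubeconfig (pod name and image are illustrative, not taken from this run):

kubectl run hosts-check --image=busybox:1.29 --restart=Never -- sleep 3600
kubectl wait --for=condition=Ready pod/hosts-check
# A kubelet-managed file begins with the "# Kubernetes-managed hosts file." banner.
kubectl exec hosts-check -- head -n 1 /etc/hosts
kubectl delete pod hosts-check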
• [SLOW TEST:15.255 seconds] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":258,"skipped":4152,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:39:15.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0904 14:39:25.258398 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Sep 4 14:40:27.278: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:40:27.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-758" for this suite. 
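The garbage-collector test hinges on ownerReferences: the rc's pods carry a reference to the ReplicationController, so deleting the rc without orphaning lets the collector remove them, which is what the wait above observes. The equivalent cascade choices from the CLI, with a hypothetical rc name (flag syntax shown for kubectl >= 1.20; 1.19-era clients use --cascade=true/false):

kubectl delete rc my-rc --cascade=background   # default: owner goes first, GC reaps the pods
kubectl delete rc my-rc --cascade=foreground   # dependents are deleted before the owner
kubectl delete rc my-rc --cascade=orphan       # pods are left behind, ownerReferences cleared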
• [SLOW TEST:72.140 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":303,"completed":259,"skipped":4181,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:40:27.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Sep 4 14:40:27.421: INFO: Waiting up to 5m0s for pod "pod-ac23dd25-651f-43dc-b876-12b7287d8256" in namespace "emptydir-1378" to be "Succeeded or Failed" Sep 4 14:40:27.457: INFO: Pod "pod-ac23dd25-651f-43dc-b876-12b7287d8256": Phase="Pending", Reason="", readiness=false. Elapsed: 36.830857ms Sep 4 14:40:29.462: INFO: Pod "pod-ac23dd25-651f-43dc-b876-12b7287d8256": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041379604s Sep 4 14:40:31.466: INFO: Pod "pod-ac23dd25-651f-43dc-b876-12b7287d8256": Phase="Running", Reason="", readiness=true. Elapsed: 4.045912934s Sep 4 14:40:33.470: INFO: Pod "pod-ac23dd25-651f-43dc-b876-12b7287d8256": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.049100538s STEP: Saw pod success Sep 4 14:40:33.470: INFO: Pod "pod-ac23dd25-651f-43dc-b876-12b7287d8256" satisfied condition "Succeeded or Failed" Sep 4 14:40:33.471: INFO: Trying to get logs from node latest-worker pod pod-ac23dd25-651f-43dc-b876-12b7287d8256 container test-container: STEP: delete the pod Sep 4 14:40:33.522: INFO: Waiting for pod pod-ac23dd25-651f-43dc-b876-12b7287d8256 to disappear Sep 4 14:40:33.528: INFO: Pod pod-ac23dd25-651f-43dc-b876-12b7287d8256 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:40:33.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1378" for this suite. 
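Each emptydir spec in this suite follows the same pattern: mount an emptyDir with the stated medium, then assert the mount's mode and ownership from inside the container. A sketch of the kind of pod involved, with illustrative names and image (the suite's exact manifest is not shown in the log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    # Print the mount's octal mode and owner, as the test's assertions do.
    command: ["sh", "-c", "stat -c '%a %U' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # default medium (node storage); medium: Memory would use tmpfs instead
EOF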
• [SLOW TEST:6.248 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":260,"skipped":4209,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:40:33.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Sep 4 14:40:33.606: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2564 /api/v1/namespaces/watch-2564/configmaps/e2e-watch-test-configmap-a a248e906-5622-4102-bac4-d064c8fb44be 6829584 0 2020-09-04 14:40:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-04 14:40:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Sep 4 14:40:33.606: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2564 /api/v1/namespaces/watch-2564/configmaps/e2e-watch-test-configmap-a a248e906-5622-4102-bac4-d064c8fb44be 6829584 0 2020-09-04 14:40:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-04 14:40:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Sep 4 14:40:43.614: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2564 /api/v1/namespaces/watch-2564/configmaps/e2e-watch-test-configmap-a a248e906-5622-4102-bac4-d064c8fb44be 6829623 0 2020-09-04 14:40:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-04 14:40:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 4 
14:40:43.615: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2564 /api/v1/namespaces/watch-2564/configmaps/e2e-watch-test-configmap-a a248e906-5622-4102-bac4-d064c8fb44be 6829623 0 2020-09-04 14:40:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-04 14:40:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Sep 4 14:40:53.625: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2564 /api/v1/namespaces/watch-2564/configmaps/e2e-watch-test-configmap-a a248e906-5622-4102-bac4-d064c8fb44be 6829653 0 2020-09-04 14:40:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-04 14:40:53 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 4 14:40:53.625: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2564 /api/v1/namespaces/watch-2564/configmaps/e2e-watch-test-configmap-a a248e906-5622-4102-bac4-d064c8fb44be 6829653 0 2020-09-04 14:40:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-04 14:40:53 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Sep 4 14:41:03.633: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2564 /api/v1/namespaces/watch-2564/configmaps/e2e-watch-test-configmap-a a248e906-5622-4102-bac4-d064c8fb44be 6829683 0 2020-09-04 14:40:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-04 14:40:53 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 4 14:41:03.633: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2564 /api/v1/namespaces/watch-2564/configmaps/e2e-watch-test-configmap-a a248e906-5622-4102-bac4-d064c8fb44be 6829683 0 2020-09-04 14:40:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-04 14:40:53 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Sep 4 14:41:13.654: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2564 /api/v1/namespaces/watch-2564/configmaps/e2e-watch-test-configmap-b 419f8d9f-9852-491a-96d1-c4c50d75783f 6829713 0 2020-09-04 14:41:13 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-09-04 14:41:13 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Sep 4 14:41:13.654: INFO: 
Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2564 /api/v1/namespaces/watch-2564/configmaps/e2e-watch-test-configmap-b 419f8d9f-9852-491a-96d1-c4c50d75783f 6829713 0 2020-09-04 14:41:13 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-09-04 14:41:13 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Sep 4 14:41:23.661: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2564 /api/v1/namespaces/watch-2564/configmaps/e2e-watch-test-configmap-b 419f8d9f-9852-491a-96d1-c4c50d75783f 6829741 0 2020-09-04 14:41:13 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-09-04 14:41:13 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Sep 4 14:41:23.661: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2564 /api/v1/namespaces/watch-2564/configmaps/e2e-watch-test-configmap-b 419f8d9f-9852-491a-96d1-c4c50d75783f 6829741 0 2020-09-04 14:41:13 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-09-04 14:41:13 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:41:33.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2564" for this suite. 
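The watch test registers three watches with different label selectors (A, B, and A-or-B) and asserts that each ADDED/MODIFIED/DELETED event reaches only the matching watchers. Roughly the same event stream can be observed with kubectl, assuming --output-watch-events is available in the client (object and label names are illustrative):

kubectl get configmaps -l watch-this-configmap=multiple-watchers-A \
    --watch --output-watch-events &
kubectl create configmap e2e-watch-demo
kubectl label configmap e2e-watch-demo watch-this-configmap=multiple-watchers-A   # watcher prints ADDED
kubectl patch configmap e2e-watch-demo -p '{"data":{"mutation":"1"}}'             # MODIFIED
kubectl delete configmap e2e-watch-demo                                           # DELETED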
• [SLOW TEST:60.136 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":303,"completed":261,"skipped":4242,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:41:33.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 4 14:41:33.780: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1f18d31c-cef7-40b8-b385-22e287782ad5" in namespace "downward-api-9710" to be "Succeeded or Failed" Sep 4 14:41:33.787: INFO: Pod "downwardapi-volume-1f18d31c-cef7-40b8-b385-22e287782ad5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.027717ms Sep 4 14:41:36.150: INFO: Pod "downwardapi-volume-1f18d31c-cef7-40b8-b385-22e287782ad5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.37033809s Sep 4 14:41:38.156: INFO: Pod "downwardapi-volume-1f18d31c-cef7-40b8-b385-22e287782ad5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.375819807s STEP: Saw pod success Sep 4 14:41:38.156: INFO: Pod "downwardapi-volume-1f18d31c-cef7-40b8-b385-22e287782ad5" satisfied condition "Succeeded or Failed" Sep 4 14:41:38.160: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-1f18d31c-cef7-40b8-b385-22e287782ad5 container client-container: STEP: delete the pod Sep 4 14:41:38.423: INFO: Waiting for pod downwardapi-volume-1f18d31c-cef7-40b8-b385-22e287782ad5 to disappear Sep 4 14:41:38.559: INFO: Pod downwardapi-volume-1f18d31c-cef7-40b8-b385-22e287782ad5 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:41:38.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9710" for this suite. 
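The downward API volume projects a container's own resource fields into files through resourceFieldRef, which is what the client-container reads back here. A sketch of a pod that surfaces its CPU limit this way (name, image, and the 500m value are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m   # the projected file then contains "500"
EOF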
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":262,"skipped":4255,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:41:38.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-d81b9e46-abf1-468e-a6e9-34ae6aeb3758 STEP: Creating a pod to test consume secrets Sep 4 14:41:38.643: INFO: Waiting up to 5m0s for pod "pod-secrets-d999be1a-157d-4182-a974-9566cf2cf7fd" in namespace "secrets-2454" to be "Succeeded or Failed" Sep 4 14:41:38.647: INFO: Pod "pod-secrets-d999be1a-157d-4182-a974-9566cf2cf7fd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.853073ms Sep 4 14:41:40.790: INFO: Pod "pod-secrets-d999be1a-157d-4182-a974-9566cf2cf7fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147600943s Sep 4 14:41:42.794: INFO: Pod "pod-secrets-d999be1a-157d-4182-a974-9566cf2cf7fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.150891156s STEP: Saw pod success Sep 4 14:41:42.794: INFO: Pod "pod-secrets-d999be1a-157d-4182-a974-9566cf2cf7fd" satisfied condition "Succeeded or Failed" Sep 4 14:41:42.796: INFO: Trying to get logs from node latest-worker pod pod-secrets-d999be1a-157d-4182-a974-9566cf2cf7fd container secret-volume-test: STEP: delete the pod Sep 4 14:41:42.850: INFO: Waiting for pod pod-secrets-d999be1a-157d-4182-a974-9566cf2cf7fd to disappear Sep 4 14:41:42.854: INFO: Pod pod-secrets-d999be1a-157d-4182-a974-9566cf2cf7fd no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:41:42.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2454" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":263,"skipped":4257,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:41:42.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 14:41:43.055: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Sep 4 14:41:48.059: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Sep 4 14:41:48.059: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 4 14:41:48.165: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-8482 /apis/apps/v1/namespaces/deployment-8482/deployments/test-cleanup-deployment 97075900-4345-4b92-8235-3695336409d3 6829882 1 2020-09-04 14:41:48 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-09-04 14:41:48 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004661f68 ClusterFirst map[] false 
false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Sep 4 14:41:48.230: INFO: New ReplicaSet "test-cleanup-deployment-5d446bdd47" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5d446bdd47 deployment-8482 /apis/apps/v1/namespaces/deployment-8482/replicasets/test-cleanup-deployment-5d446bdd47 fe99b763-e126-41f0-a61a-541a6c1f9299 6829885 1 2020-09-04 14:41:48 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 97075900-4345-4b92-8235-3695336409d3 0xc004c94897 0xc004c94898}] [] [{kube-controller-manager Update apps/v1 2020-09-04 14:41:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97075900-4345-4b92-8235-3695336409d3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5d446bdd47,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004c94928 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 4 14:41:48.230: 
INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Sep 4 14:41:48.231: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-8482 /apis/apps/v1/namespaces/deployment-8482/replicasets/test-cleanup-controller 73b17acb-e691-4fda-a734-c1c9b5dfe4e0 6829884 1 2020-09-04 14:41:43 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 97075900-4345-4b92-8235-3695336409d3 0xc004c9477f 0xc004c94790}] [] [{e2e.test Update apps/v1 2020-09-04 14:41:43 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-04 14:41:48 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"97075900-4345-4b92-8235-3695336409d3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004c94828 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Sep 4 14:41:48.327: INFO: Pod "test-cleanup-controller-r8gl7" is available: &Pod{ObjectMeta:{test-cleanup-controller-r8gl7 test-cleanup-controller- deployment-8482 /api/v1/namespaces/deployment-8482/pods/test-cleanup-controller-r8gl7 dfdc2a5d-d30b-47d9-b4c7-313cc685c46b 6829866 0 2020-09-04 14:41:43 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 73b17acb-e691-4fda-a734-c1c9b5dfe4e0 0xc004c94e47 0xc004c94e48}] [] [{kube-controller-manager Update v1 2020-09-04 14:41:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"73b17acb-e691-4fda-a734-c1c9b5dfe4e0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 
2020-09-04 14:41:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.76\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gzh4l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gzh4l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gzh4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 14:41:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 14:41:46 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 14:41:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 14:41:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.76,StartTime:2020-09-04 14:41:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-04 14:41:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0c7957637fd6b7537072355db924afb9fe11618ff0597a97394d8ff638314e43,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.76,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 14:41:48.327: INFO: Pod "test-cleanup-deployment-5d446bdd47-j59qg" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-5d446bdd47-j59qg test-cleanup-deployment-5d446bdd47- deployment-8482 /api/v1/namespaces/deployment-8482/pods/test-cleanup-deployment-5d446bdd47-j59qg 77b2afa1-431b-4474-ae10-79e0d60039d3 6829889 0 2020-09-04 14:41:48 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-5d446bdd47 fe99b763-e126-41f0-a61a-541a6c1f9299 0xc004c95017 0xc004c95018}] [] [{kube-controller-manager Update v1 2020-09-04 14:41:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe99b763-e126-41f0-a61a-541a6c1f9299\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gzh4l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gzh4l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gzh4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPr
esent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 14:41:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:41:48.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8482" for this suite. 
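The cleanup behavior asserted above is driven by RevisionHistoryLimit:*0 in the deployment dump: with a zero history limit, the deployment controller deletes superseded ReplicaSets as soon as the rollout no longer needs them. A sketch of a deployment built the same way, reusing the names, labels, and image visible in the dump (all other fields are left at their defaults):

package e2esketch

import (
    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// cleanupDeployment keeps zero old ReplicaSets, so history is pruned
// immediately after each rollout completes.
func cleanupDeployment() *appsv1.Deployment {
    replicas := int32(1)
    historyLimit := int32(0) // RevisionHistoryLimit:*0 in the dump above
    labels := map[string]string{"name": "cleanup-pod"}
    return &appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
        Spec: appsv1.DeploymentSpec{
            Replicas:             &replicas,
            RevisionHistoryLimit: &historyLimit,
            Selector:             &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "agnhost",
                        Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20",
                    }},
                },
            },
        },
    }
}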
• [SLOW TEST:5.507 seconds] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":303,"completed":264,"skipped":4290,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:41:48.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 4 14:41:49.145: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 4 14:41:51.155: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734827309, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734827309, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734827309, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734827309, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 4 14:41:53.229: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734827309, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734827309, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734827309, loc:(*time.Location)(0x7702840)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734827309, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 4 14:41:55.330: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734827309, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734827309, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734827309, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734827309, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 4 14:41:58.196: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 14:41:58.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7030-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:41:59.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3305" for this suite. STEP: Destroying namespace "webhook-3305-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.213 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":303,"completed":265,"skipped":4295,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:41:59.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-718fc357-4667-4029-8a10-4253021b75be STEP: Creating a pod to test consume secrets Sep 4 14:41:59.917: INFO: Waiting up to 5m0s for pod "pod-secrets-266a5513-835e-4bd9-85f5-a7567344aabb" in namespace "secrets-2328" to be "Succeeded or Failed" Sep 4 14:41:59.920: INFO: Pod "pod-secrets-266a5513-835e-4bd9-85f5-a7567344aabb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.015985ms Sep 4 14:42:01.927: INFO: Pod "pod-secrets-266a5513-835e-4bd9-85f5-a7567344aabb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009198358s Sep 4 14:42:03.945: INFO: Pod "pod-secrets-266a5513-835e-4bd9-85f5-a7567344aabb": Phase="Running", Reason="", readiness=true. Elapsed: 4.027863638s Sep 4 14:42:05.949: INFO: Pod "pod-secrets-266a5513-835e-4bd9-85f5-a7567344aabb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.031428697s STEP: Saw pod success Sep 4 14:42:05.949: INFO: Pod "pod-secrets-266a5513-835e-4bd9-85f5-a7567344aabb" satisfied condition "Succeeded or Failed" Sep 4 14:42:05.952: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-266a5513-835e-4bd9-85f5-a7567344aabb container secret-volume-test: STEP: delete the pod Sep 4 14:42:06.019: INFO: Waiting for pod pod-secrets-266a5513-835e-4bd9-85f5-a7567344aabb to disappear Sep 4 14:42:06.060: INFO: Pod pod-secrets-266a5513-835e-4bd9-85f5-a7567344aabb no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:42:06.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2328" for this suite. • [SLOW TEST:6.477 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":266,"skipped":4296,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:42:06.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Sep 4 14:42:06.218: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:42:14.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4966" for this suite. 
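The "setting up watch" step is what lets the test assert that creation and graceful deletion were observed as events rather than inferred by polling. A client-go sketch of the deletion half, with a helper name and field-selector approach of our own choosing:

package e2esketch

import (
    "context"
    "errors"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/watch"
    "k8s.io/client-go/kubernetes"
)

// awaitPodDeletion starts a watch scoped to one pod name and blocks until
// a Deleted event arrives, mirroring the submit-then-remove flow above.
func awaitPodDeletion(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    w, err := cs.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{
        FieldSelector: "metadata.name=" + name,
    })
    if err != nil {
        return err
    }
    defer w.Stop()
    for ev := range w.ResultChan() {
        if ev.Type == watch.Deleted {
            return nil // deletion observed
        }
    }
    return errors.New("watch ended before the pod deletion was observed")
}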
• [SLOW TEST:8.388 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":303,"completed":267,"skipped":4309,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:42:14.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Sep 4 14:42:15.041: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Sep 4 14:42:17.647: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734827335, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734827335, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734827335, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734827335, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 4 14:42:19.668: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734827335, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734827335, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not 
have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734827335, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734827335, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 4 14:42:22.752: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 14:42:22.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:42:23.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-370" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.678 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":303,"completed":268,"skipped":4336,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:42:24.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 4 14:42:24.186: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f75df1e9-6b41-4c5e-95f2-195785701341" in namespace "projected-2676" to be "Succeeded or Failed" Sep 4 14:42:24.189: INFO: Pod "downwardapi-volume-f75df1e9-6b41-4c5e-95f2-195785701341": Phase="Pending", Reason="", readiness=false. Elapsed: 2.811983ms Sep 4 14:42:26.447: INFO: Pod "downwardapi-volume-f75df1e9-6b41-4c5e-95f2-195785701341": Phase="Pending", Reason="", readiness=false. Elapsed: 2.260419693s Sep 4 14:42:28.451: INFO: Pod "downwardapi-volume-f75df1e9-6b41-4c5e-95f2-195785701341": Phase="Running", Reason="", readiness=true. Elapsed: 4.264968496s Sep 4 14:42:30.455: INFO: Pod "downwardapi-volume-f75df1e9-6b41-4c5e-95f2-195785701341": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.268828599s STEP: Saw pod success Sep 4 14:42:30.455: INFO: Pod "downwardapi-volume-f75df1e9-6b41-4c5e-95f2-195785701341" satisfied condition "Succeeded or Failed" Sep 4 14:42:30.458: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-f75df1e9-6b41-4c5e-95f2-195785701341 container client-container: STEP: delete the pod Sep 4 14:42:30.498: INFO: Waiting for pod downwardapi-volume-f75df1e9-6b41-4c5e-95f2-195785701341 to disappear Sep 4 14:42:30.515: INFO: Pod downwardapi-volume-f75df1e9-6b41-4c5e-95f2-195785701341 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:42:30.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2676" for this suite. 
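The twist in this variant is that the container declares no CPU limit at all, so the kubelet falls back to writing the node's allocatable CPU into the projected file; that fallback value is what the test reads back. A sketch of the projected downwardAPI volume involved (volume and container names are illustrative):

package e2esketch

import corev1 "k8s.io/api/core/v1"

// projectedCPULimitVolume projects the container's CPU limit into a file.
// If the named container sets no CPU limit, the kubelet substitutes the
// node's allocatable CPU, which is the behavior this test asserts.
func projectedCPULimitVolume() corev1.Volume {
    return corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    DownwardAPI: &corev1.DownwardAPIProjection{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "cpu_limit",
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container", // hypothetical
                                Resource:      "limits.cpu",
                            },
                        }},
                    },
                }},
            },
        },
    }
}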
• [SLOW TEST:6.387 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":269,"skipped":4349,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:42:30.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command Sep 4 14:42:30.615: INFO: Waiting up to 5m0s for pod "client-containers-2332f490-642a-4efd-a872-4aa7381d55b7" in namespace "containers-382" to be "Succeeded or Failed" Sep 4 14:42:30.626: INFO: Pod "client-containers-2332f490-642a-4efd-a872-4aa7381d55b7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.902275ms Sep 4 14:42:32.696: INFO: Pod "client-containers-2332f490-642a-4efd-a872-4aa7381d55b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081090783s Sep 4 14:42:34.700: INFO: Pod "client-containers-2332f490-642a-4efd-a872-4aa7381d55b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084598897s STEP: Saw pod success Sep 4 14:42:34.700: INFO: Pod "client-containers-2332f490-642a-4efd-a872-4aa7381d55b7" satisfied condition "Succeeded or Failed" Sep 4 14:42:34.703: INFO: Trying to get logs from node latest-worker pod client-containers-2332f490-642a-4efd-a872-4aa7381d55b7 container test-container: STEP: delete the pod Sep 4 14:42:34.937: INFO: Waiting for pod client-containers-2332f490-642a-4efd-a872-4aa7381d55b7 to disappear Sep 4 14:42:35.144: INFO: Pod client-containers-2332f490-642a-4efd-a872-4aa7381d55b7 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:42:35.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-382" for this suite. 
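"Override the image's default command" maps onto a single field: a container's Command replaces the image's ENTRYPOINT (while Args, if set, replaces CMD). A sketch with a hypothetical argument list:

package e2esketch

import corev1 "k8s.io/api/core/v1"

// overrideEntrypoint replaces the image's ENTRYPOINT via Command; leaving
// Args unset means the image's CMD is dropped rather than appended.
func overrideEntrypoint() corev1.Container {
    return corev1.Container{
        Name:    "test-container",
        Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.20",
        Command: []string{"/agnhost", "entrypoint-tester", "override", "arguments"}, // illustrative
    }
}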
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":303,"completed":270,"skipped":4381,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:42:35.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Sep 4 14:42:35.287: INFO: Waiting up to 5m0s for pod "downward-api-969944f4-1c09-45a8-8f7b-83535f4ef163" in namespace "downward-api-4536" to be "Succeeded or Failed" Sep 4 14:42:35.328: INFO: Pod "downward-api-969944f4-1c09-45a8-8f7b-83535f4ef163": Phase="Pending", Reason="", readiness=false. Elapsed: 40.734987ms Sep 4 14:42:37.442: INFO: Pod "downward-api-969944f4-1c09-45a8-8f7b-83535f4ef163": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154932878s Sep 4 14:42:39.446: INFO: Pod "downward-api-969944f4-1c09-45a8-8f7b-83535f4ef163": Phase="Running", Reason="", readiness=true. Elapsed: 4.159287186s Sep 4 14:42:41.451: INFO: Pod "downward-api-969944f4-1c09-45a8-8f7b-83535f4ef163": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.16365997s STEP: Saw pod success Sep 4 14:42:41.451: INFO: Pod "downward-api-969944f4-1c09-45a8-8f7b-83535f4ef163" satisfied condition "Succeeded or Failed" Sep 4 14:42:41.453: INFO: Trying to get logs from node latest-worker pod downward-api-969944f4-1c09-45a8-8f7b-83535f4ef163 container dapi-container: STEP: delete the pod Sep 4 14:42:41.491: INFO: Waiting for pod downward-api-969944f4-1c09-45a8-8f7b-83535f4ef163 to disappear Sep 4 14:42:41.507: INFO: Pod downward-api-969944f4-1c09-45a8-8f7b-83535f4ef163 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:42:41.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4536" for this suite. 
• [SLOW TEST:6.364 seconds] [sig-node] Downward API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":303,"completed":271,"skipped":4415,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:42:41.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 4 14:42:41.608: INFO: Waiting up to 5m0s for pod "downwardapi-volume-47c44d5c-bd85-4eb6-aff0-fbf529175003" in namespace "projected-1090" to be "Succeeded or Failed" Sep 4 14:42:41.624: INFO: Pod "downwardapi-volume-47c44d5c-bd85-4eb6-aff0-fbf529175003": Phase="Pending", Reason="", readiness=false. Elapsed: 15.504912ms Sep 4 14:42:43.628: INFO: Pod "downwardapi-volume-47c44d5c-bd85-4eb6-aff0-fbf529175003": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019135992s Sep 4 14:42:45.631: INFO: Pod "downwardapi-volume-47c44d5c-bd85-4eb6-aff0-fbf529175003": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023042874s STEP: Saw pod success Sep 4 14:42:45.631: INFO: Pod "downwardapi-volume-47c44d5c-bd85-4eb6-aff0-fbf529175003" satisfied condition "Succeeded or Failed" Sep 4 14:42:45.635: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-47c44d5c-bd85-4eb6-aff0-fbf529175003 container client-container: STEP: delete the pod Sep 4 14:42:45.776: INFO: Waiting for pod downwardapi-volume-47c44d5c-bd85-4eb6-aff0-fbf529175003 to disappear Sep 4 14:42:45.792: INFO: Pod downwardapi-volume-47c44d5c-bd85-4eb6-aff0-fbf529175003 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:42:45.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1090" for this suite. 
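This is the same limits.cpu resourceFieldRef as the earlier downward API volume tests; the knob worth calling out is Divisor, which sets the unit the quantity is rendered in. A sketch under assumed values: with Divisor "1m", a 1250m CPU limit is written to the file as the string "1250".

package e2esketch

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
)

// cpuLimitFile exposes limits.cpu scaled by a divisor, so the file holds
// an integer count of millicores instead of a quantity like "1250m".
func cpuLimitFile() corev1.DownwardAPIVolumeFile {
    return corev1.DownwardAPIVolumeFile{
        Path: "cpu_limit",
        ResourceFieldRef: &corev1.ResourceFieldSelector{
            ContainerName: "client-container", // hypothetical
            Resource:      "limits.cpu",
            Divisor:       resource.MustParse("1m"),
        },
    }
}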
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":272,"skipped":4454,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:42:45.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 14:42:45.921: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:42:46.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9340" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":303,"completed":273,"skipped":4470,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:42:46.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Sep 4 14:42:47.036: INFO: Waiting up to 5m0s for pod "pod-ef3c0d43-7d2c-4813-bda8-dba14cc3f9f7" in namespace "emptydir-1731" to be "Succeeded or Failed" Sep 4 14:42:47.043: INFO: Pod "pod-ef3c0d43-7d2c-4813-bda8-dba14cc3f9f7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.543995ms Sep 4 14:42:49.047: INFO: Pod "pod-ef3c0d43-7d2c-4813-bda8-dba14cc3f9f7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.010259127s Sep 4 14:42:51.066: INFO: Pod "pod-ef3c0d43-7d2c-4813-bda8-dba14cc3f9f7": Phase="Running", Reason="", readiness=true. Elapsed: 4.029546998s Sep 4 14:42:53.070: INFO: Pod "pod-ef3c0d43-7d2c-4813-bda8-dba14cc3f9f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.033436637s STEP: Saw pod success Sep 4 14:42:53.070: INFO: Pod "pod-ef3c0d43-7d2c-4813-bda8-dba14cc3f9f7" satisfied condition "Succeeded or Failed" Sep 4 14:42:53.073: INFO: Trying to get logs from node latest-worker pod pod-ef3c0d43-7d2c-4813-bda8-dba14cc3f9f7 container test-container: STEP: delete the pod Sep 4 14:42:53.137: INFO: Waiting for pod pod-ef3c0d43-7d2c-4813-bda8-dba14cc3f9f7 to disappear Sep 4 14:42:53.142: INFO: Pod pod-ef3c0d43-7d2c-4813-bda8-dba14cc3f9f7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:42:53.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1731" for this suite. • [SLOW TEST:6.204 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":274,"skipped":4478,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:42:53.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-06f7238c-0c61-46e8-8d61-473b2e8a551b STEP: Creating a pod to test consume configMaps Sep 4 14:42:53.236: INFO: Waiting up to 5m0s for pod "pod-configmaps-435faf1c-91e1-4c9e-9cc9-7df96a36baa2" in namespace "configmap-5297" to be "Succeeded or Failed" Sep 4 14:42:53.254: INFO: Pod "pod-configmaps-435faf1c-91e1-4c9e-9cc9-7df96a36baa2": Phase="Pending", Reason="", readiness=false. Elapsed: 18.165953ms Sep 4 14:42:55.258: INFO: Pod "pod-configmaps-435faf1c-91e1-4c9e-9cc9-7df96a36baa2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02253392s Sep 4 14:42:57.263: INFO: Pod "pod-configmaps-435faf1c-91e1-4c9e-9cc9-7df96a36baa2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.026765539s Sep 4 14:42:59.267: INFO: Pod "pod-configmaps-435faf1c-91e1-4c9e-9cc9-7df96a36baa2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031084345s STEP: Saw pod success Sep 4 14:42:59.267: INFO: Pod "pod-configmaps-435faf1c-91e1-4c9e-9cc9-7df96a36baa2" satisfied condition "Succeeded or Failed" Sep 4 14:42:59.270: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-435faf1c-91e1-4c9e-9cc9-7df96a36baa2 container configmap-volume-test: STEP: delete the pod Sep 4 14:42:59.297: INFO: Waiting for pod pod-configmaps-435faf1c-91e1-4c9e-9cc9-7df96a36baa2 to disappear Sep 4 14:42:59.348: INFO: Pod pod-configmaps-435faf1c-91e1-4c9e-9cc9-7df96a36baa2 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:42:59.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5297" for this suite. • [SLOW TEST:6.205 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":275,"skipped":4489,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:42:59.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: getting the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:42:59.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2001" for this suite. STEP: Destroying namespace "nspatchtest-427d3272-5b7f-44bb-83ac-602daddfe42b-9490" for this suite.
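The patch step above reduces to a single Patch call against the Namespace API followed by a read-back. A minimal client-go sketch of the same flow, reusing the kubeconfig path shown in the log; the target namespace name, the label key/value, and the choice of merge patch are illustrative assumptions, not the suite's generated fixtures:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the same kubeconfig the suite uses (path from the log above).
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Merge-patch a label onto the namespace; the suite may use a
        // different patch type, and the key/value here are illustrative.
        patch := []byte(`{"metadata":{"labels":{"e2e-patched":"true"}}}`)
        ns, err := cs.CoreV1().Namespaces().Patch(context.TODO(),
            "default", types.MergePatchType, patch, metav1.PatchOptions{})
        if err != nil {
            panic(err)
        }
        // Read the label back, as the "ensuring it has the label" step does.
        fmt.Println(ns.Labels["e2e-patched"])
    }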
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":303,"completed":276,"skipped":4512,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:42:59.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-201bd67c-d6db-4855-92e6-44d5564cf494 STEP: Creating a pod to test consume secrets Sep 4 14:42:59.641: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5f2a27b5-1b4e-49fd-9a2d-3ddd5b0927f8" in namespace "projected-913" to be "Succeeded or Failed" Sep 4 14:42:59.649: INFO: Pod "pod-projected-secrets-5f2a27b5-1b4e-49fd-9a2d-3ddd5b0927f8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.1291ms Sep 4 14:43:01.652: INFO: Pod "pod-projected-secrets-5f2a27b5-1b4e-49fd-9a2d-3ddd5b0927f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010732219s Sep 4 14:43:03.703: INFO: Pod "pod-projected-secrets-5f2a27b5-1b4e-49fd-9a2d-3ddd5b0927f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061700626s Sep 4 14:43:05.706: INFO: Pod "pod-projected-secrets-5f2a27b5-1b4e-49fd-9a2d-3ddd5b0927f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.064614085s STEP: Saw pod success Sep 4 14:43:05.706: INFO: Pod "pod-projected-secrets-5f2a27b5-1b4e-49fd-9a2d-3ddd5b0927f8" satisfied condition "Succeeded or Failed" Sep 4 14:43:05.709: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-5f2a27b5-1b4e-49fd-9a2d-3ddd5b0927f8 container projected-secret-volume-test: STEP: delete the pod Sep 4 14:43:05.741: INFO: Waiting for pod pod-projected-secrets-5f2a27b5-1b4e-49fd-9a2d-3ddd5b0927f8 to disappear Sep 4 14:43:05.839: INFO: Pod pod-projected-secrets-5f2a27b5-1b4e-49fd-9a2d-3ddd5b0927f8 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:43:05.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-913" for this suite. 
• [SLOW TEST:6.321 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":277,"skipped":4538,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:43:05.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Sep 4 14:43:06.238: INFO: Waiting up to 1m0s for all nodes to be ready Sep 4 14:44:06.272: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Sep 4 14:44:06.324: INFO: Created pod: pod0-sched-preemption-low-priority Sep 4 14:44:06.356: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that uses the same resources as a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:44:34.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-9609" for this suite.
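The preemption above is driven entirely by pod priority: the low- and medium-priority victim pods reference user-created PriorityClass objects, while the preemptor pod is admitted under a built-in critical class. A sketch of that plumbing with client-go, under the assumption of an illustrative class name and value (not the suite's fixtures):

    package main

    import (
        "context"

        schedulingv1 "k8s.io/api/scheduling/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // A user-defined class for the victim pods; a higher Value means
        // the scheduler prefers keeping this pod when preempting.
        pc := &schedulingv1.PriorityClass{
            ObjectMeta: metav1.ObjectMeta{Name: "low-priority"}, // illustrative
            Value:      100,                                     // illustrative
        }
        if _, err := cs.SchedulingV1().PriorityClasses().Create(
            context.TODO(), pc, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
        // The critical preemptor would set Spec.PriorityClassName to a
        // built-in class such as "system-cluster-critical", allowing the
        // scheduler to evict a lower-priority pod when the node is full.
    }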
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:88.765 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":303,"completed":278,"skipped":4554,"failed":0} SSSS ------------------------------ [sig-node] PodTemplates should delete a collection of pod templates [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:44:34.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pod templates Sep 4 14:44:34.726: INFO: created test-podtemplate-1 Sep 4 14:44:34.772: INFO: created test-podtemplate-2 Sep 4 14:44:34.778: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Sep 4 14:44:35.089: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Sep 4 14:44:35.235: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:44:35.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-6586" for this suite. 
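The DeleteCollection request logged above removes every matching pod template in a single API call rather than three individual deletes. A client-go sketch of that call and the follow-up quantity check, assuming a hypothetical podtemplate-set=true label (the suite's actual selector may differ):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // One call removes every template matching the selector; the
        // namespace is taken from the log, the label is an assumption.
        sel := metav1.ListOptions{LabelSelector: "podtemplate-set=true"}
        err = cs.CoreV1().PodTemplates("podtemplate-6586").DeleteCollection(
            context.TODO(), metav1.DeleteOptions{}, sel)
        if err != nil {
            panic(err)
        }
        // Confirm the quantity, as the test's final step does.
        list, err := cs.CoreV1().PodTemplates("podtemplate-6586").List(
            context.TODO(), sel)
        if err != nil {
            panic(err)
        }
        fmt.Println(len(list.Items)) // expect 0 after deletion
    }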
•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":303,"completed":279,"skipped":4558,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:44:35.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 4 14:44:36.263: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 4 14:44:38.274: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734827476, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734827476, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734827476, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734827476, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 4 14:44:40.276: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734827476, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734827476, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734827476, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734827476, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 4 14:44:43.299: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor 
timeout [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:44:55.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8816" for this suite. STEP: Destroying namespace "webhook-8816-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:20.460 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":303,"completed":280,"skipped":4563,"failed":0} [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:44:55.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 4 14:44:55.778: INFO: Waiting up to 5m0s for pod "downwardapi-volume-509ab0c4-2dbc-494a-8b67-ecc9fedc615b" in namespace "downward-api-4624" to be "Succeeded or Failed" Sep 4 14:44:55.830: INFO: Pod "downwardapi-volume-509ab0c4-2dbc-494a-8b67-ecc9fedc615b": Phase="Pending", 
Reason="", readiness=false. Elapsed: 51.562757ms Sep 4 14:44:57.914: INFO: Pod "downwardapi-volume-509ab0c4-2dbc-494a-8b67-ecc9fedc615b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13593002s Sep 4 14:44:59.918: INFO: Pod "downwardapi-volume-509ab0c4-2dbc-494a-8b67-ecc9fedc615b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139940283s Sep 4 14:45:01.923: INFO: Pod "downwardapi-volume-509ab0c4-2dbc-494a-8b67-ecc9fedc615b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.144397146s STEP: Saw pod success Sep 4 14:45:01.923: INFO: Pod "downwardapi-volume-509ab0c4-2dbc-494a-8b67-ecc9fedc615b" satisfied condition "Succeeded or Failed" Sep 4 14:45:01.926: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-509ab0c4-2dbc-494a-8b67-ecc9fedc615b container client-container: STEP: delete the pod Sep 4 14:45:01.992: INFO: Waiting for pod downwardapi-volume-509ab0c4-2dbc-494a-8b67-ecc9fedc615b to disappear Sep 4 14:45:02.038: INFO: Pod downwardapi-volume-509ab0c4-2dbc-494a-8b67-ecc9fedc615b no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:45:02.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4624" for this suite. • [SLOW TEST:6.338 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":281,"skipped":4563,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:45:02.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 4 14:45:02.189: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d279cd2d-2665-4df9-a700-9d8ed44276da" in namespace "downward-api-5235" to be "Succeeded or Failed" Sep 4 14:45:02.217: INFO: Pod "downwardapi-volume-d279cd2d-2665-4df9-a700-9d8ed44276da": 
Phase="Pending", Reason="", readiness=false. Elapsed: 27.987907ms Sep 4 14:45:04.221: INFO: Pod "downwardapi-volume-d279cd2d-2665-4df9-a700-9d8ed44276da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032126557s Sep 4 14:45:06.228: INFO: Pod "downwardapi-volume-d279cd2d-2665-4df9-a700-9d8ed44276da": Phase="Running", Reason="", readiness=true. Elapsed: 4.039205857s Sep 4 14:45:08.232: INFO: Pod "downwardapi-volume-d279cd2d-2665-4df9-a700-9d8ed44276da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.043185743s STEP: Saw pod success Sep 4 14:45:08.232: INFO: Pod "downwardapi-volume-d279cd2d-2665-4df9-a700-9d8ed44276da" satisfied condition "Succeeded or Failed" Sep 4 14:45:08.235: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-d279cd2d-2665-4df9-a700-9d8ed44276da container client-container: STEP: delete the pod Sep 4 14:45:08.295: INFO: Waiting for pod downwardapi-volume-d279cd2d-2665-4df9-a700-9d8ed44276da to disappear Sep 4 14:45:08.338: INFO: Pod downwardapi-volume-d279cd2d-2665-4df9-a700-9d8ed44276da no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:45:08.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5235" for this suite. • [SLOW TEST:6.298 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":282,"skipped":4576,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:45:08.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-8549 STEP: creating service affinity-nodeport-transition in namespace services-8549 STEP: creating replication controller affinity-nodeport-transition in namespace services-8549 I0904 14:45:08.573449 7 runners.go:190] Created replication controller 
with name: affinity-nodeport-transition, namespace: services-8549, replica count: 3 I0904 14:45:11.623817 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0904 14:45:14.624067 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0904 14:45:17.624325 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 4 14:45:17.635: INFO: Creating new exec pod Sep 4 14:45:22.668: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-8549 execpod-affinityhfmkm -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Sep 4 14:45:26.152: INFO: stderr: "I0904 14:45:26.046508 3312 log.go:181] (0xc0002d2210) (0xc000d2a280) Create stream\nI0904 14:45:26.046570 3312 log.go:181] (0xc0002d2210) (0xc000d2a280) Stream added, broadcasting: 1\nI0904 14:45:26.049312 3312 log.go:181] (0xc0002d2210) Reply frame received for 1\nI0904 14:45:26.049373 3312 log.go:181] (0xc0002d2210) (0xc000b15400) Create stream\nI0904 14:45:26.049396 3312 log.go:181] (0xc0002d2210) (0xc000b15400) Stream added, broadcasting: 3\nI0904 14:45:26.050469 3312 log.go:181] (0xc0002d2210) Reply frame received for 3\nI0904 14:45:26.050507 3312 log.go:181] (0xc0002d2210) (0xc000cf6000) Create stream\nI0904 14:45:26.050521 3312 log.go:181] (0xc0002d2210) (0xc000cf6000) Stream added, broadcasting: 5\nI0904 14:45:26.051282 3312 log.go:181] (0xc0002d2210) Reply frame received for 5\nI0904 14:45:26.141056 3312 log.go:181] (0xc0002d2210) Data frame received for 5\nI0904 14:45:26.141090 3312 log.go:181] (0xc000cf6000) (5) Data frame handling\nI0904 14:45:26.141114 3312 log.go:181] (0xc000cf6000) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI0904 14:45:26.141581 3312 log.go:181] (0xc0002d2210) Data frame received for 5\nI0904 14:45:26.141602 3312 log.go:181] (0xc000cf6000) (5) Data frame handling\nI0904 14:45:26.141619 3312 log.go:181] (0xc000cf6000) (5) Data frame sent\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0904 14:45:26.141789 3312 log.go:181] (0xc0002d2210) Data frame received for 5\nI0904 14:45:26.141805 3312 log.go:181] (0xc000cf6000) (5) Data frame handling\nI0904 14:45:26.141820 3312 log.go:181] (0xc0002d2210) Data frame received for 3\nI0904 14:45:26.141827 3312 log.go:181] (0xc000b15400) (3) Data frame handling\nI0904 14:45:26.143349 3312 log.go:181] (0xc0002d2210) Data frame received for 1\nI0904 14:45:26.143374 3312 log.go:181] (0xc000d2a280) (1) Data frame handling\nI0904 14:45:26.143396 3312 log.go:181] (0xc000d2a280) (1) Data frame sent\nI0904 14:45:26.143411 3312 log.go:181] (0xc0002d2210) (0xc000d2a280) Stream removed, broadcasting: 1\nI0904 14:45:26.143427 3312 log.go:181] (0xc0002d2210) Go away received\nI0904 14:45:26.143775 3312 log.go:181] (0xc0002d2210) (0xc000d2a280) Stream removed, broadcasting: 1\nI0904 14:45:26.143792 3312 log.go:181] (0xc0002d2210) (0xc000b15400) Stream removed, broadcasting: 3\nI0904 14:45:26.143799 3312 log.go:181] (0xc0002d2210) (0xc000cf6000) Stream removed, broadcasting: 5\n" Sep 4 14:45:26.152: INFO: stdout: "" Sep 4 14:45:26.153: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 
--kubeconfig=/root/.kube/config exec --namespace=services-8549 execpod-affinityhfmkm -- /bin/sh -x -c nc -zv -t -w 2 10.96.255.92 80' Sep 4 14:45:26.406: INFO: stderr: "I0904 14:45:26.301705 3330 log.go:181] (0xc000f07600) (0xc000c88820) Create stream\nI0904 14:45:26.301778 3330 log.go:181] (0xc000f07600) (0xc000c88820) Stream added, broadcasting: 1\nI0904 14:45:26.306352 3330 log.go:181] (0xc000f07600) Reply frame received for 1\nI0904 14:45:26.306385 3330 log.go:181] (0xc000f07600) (0xc000b780a0) Create stream\nI0904 14:45:26.306394 3330 log.go:181] (0xc000f07600) (0xc000b780a0) Stream added, broadcasting: 3\nI0904 14:45:26.307371 3330 log.go:181] (0xc000f07600) Reply frame received for 3\nI0904 14:45:26.307419 3330 log.go:181] (0xc000f07600) (0xc000c88000) Create stream\nI0904 14:45:26.307449 3330 log.go:181] (0xc000f07600) (0xc000c88000) Stream added, broadcasting: 5\nI0904 14:45:26.308240 3330 log.go:181] (0xc000f07600) Reply frame received for 5\nI0904 14:45:26.394632 3330 log.go:181] (0xc000f07600) Data frame received for 5\nI0904 14:45:26.394682 3330 log.go:181] (0xc000c88000) (5) Data frame handling\nI0904 14:45:26.394699 3330 log.go:181] (0xc000c88000) (5) Data frame sent\nI0904 14:45:26.394710 3330 log.go:181] (0xc000f07600) Data frame received for 5\nI0904 14:45:26.394720 3330 log.go:181] (0xc000c88000) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.255.92 80\nConnection to 10.96.255.92 80 port [tcp/http] succeeded!\nI0904 14:45:26.394734 3330 log.go:181] (0xc000f07600) Data frame received for 3\nI0904 14:45:26.394774 3330 log.go:181] (0xc000b780a0) (3) Data frame handling\nI0904 14:45:26.395976 3330 log.go:181] (0xc000f07600) Data frame received for 1\nI0904 14:45:26.395993 3330 log.go:181] (0xc000c88820) (1) Data frame handling\nI0904 14:45:26.395999 3330 log.go:181] (0xc000c88820) (1) Data frame sent\nI0904 14:45:26.396006 3330 log.go:181] (0xc000f07600) (0xc000c88820) Stream removed, broadcasting: 1\nI0904 14:45:26.396014 3330 log.go:181] (0xc000f07600) Go away received\nI0904 14:45:26.396403 3330 log.go:181] (0xc000f07600) (0xc000c88820) Stream removed, broadcasting: 1\nI0904 14:45:26.396418 3330 log.go:181] (0xc000f07600) (0xc000b780a0) Stream removed, broadcasting: 3\nI0904 14:45:26.396425 3330 log.go:181] (0xc000f07600) (0xc000c88000) Stream removed, broadcasting: 5\n" Sep 4 14:45:26.406: INFO: stdout: "" Sep 4 14:45:26.406: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-8549 execpod-affinityhfmkm -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 30348' Sep 4 14:45:26.598: INFO: stderr: "I0904 14:45:26.532477 3348 log.go:181] (0xc000018fd0) (0xc0000c3f40) Create stream\nI0904 14:45:26.532543 3348 log.go:181] (0xc000018fd0) (0xc0000c3f40) Stream added, broadcasting: 1\nI0904 14:45:26.535247 3348 log.go:181] (0xc000018fd0) Reply frame received for 1\nI0904 14:45:26.535287 3348 log.go:181] (0xc000018fd0) (0xc00072c3c0) Create stream\nI0904 14:45:26.535300 3348 log.go:181] (0xc000018fd0) (0xc00072c3c0) Stream added, broadcasting: 3\nI0904 14:45:26.536075 3348 log.go:181] (0xc000018fd0) Reply frame received for 3\nI0904 14:45:26.536110 3348 log.go:181] (0xc000018fd0) (0xc00072c460) Create stream\nI0904 14:45:26.536127 3348 log.go:181] (0xc000018fd0) (0xc00072c460) Stream added, broadcasting: 5\nI0904 14:45:26.537013 3348 log.go:181] (0xc000018fd0) Reply frame received for 5\nI0904 14:45:26.584939 3348 log.go:181] (0xc000018fd0) Data frame received for 5\nI0904 14:45:26.584978 3348 log.go:181] 
(0xc00072c460) (5) Data frame handling\nI0904 14:45:26.584993 3348 log.go:181] (0xc00072c460) (5) Data frame sent\nI0904 14:45:26.585000 3348 log.go:181] (0xc000018fd0) Data frame received for 5\nI0904 14:45:26.585006 3348 log.go:181] (0xc00072c460) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.11 30348\nConnection to 172.18.0.11 30348 port [tcp/30348] succeeded!\nI0904 14:45:26.585041 3348 log.go:181] (0xc00072c460) (5) Data frame sent\nI0904 14:45:26.585614 3348 log.go:181] (0xc000018fd0) Data frame received for 3\nI0904 14:45:26.585630 3348 log.go:181] (0xc00072c3c0) (3) Data frame handling\nI0904 14:45:26.585753 3348 log.go:181] (0xc000018fd0) Data frame received for 5\nI0904 14:45:26.585764 3348 log.go:181] (0xc00072c460) (5) Data frame handling\nI0904 14:45:26.587383 3348 log.go:181] (0xc000018fd0) Data frame received for 1\nI0904 14:45:26.587402 3348 log.go:181] (0xc0000c3f40) (1) Data frame handling\nI0904 14:45:26.587415 3348 log.go:181] (0xc0000c3f40) (1) Data frame sent\nI0904 14:45:26.587431 3348 log.go:181] (0xc000018fd0) (0xc0000c3f40) Stream removed, broadcasting: 1\nI0904 14:45:26.587448 3348 log.go:181] (0xc000018fd0) Go away received\nI0904 14:45:26.587817 3348 log.go:181] (0xc000018fd0) (0xc0000c3f40) Stream removed, broadcasting: 1\nI0904 14:45:26.587832 3348 log.go:181] (0xc000018fd0) (0xc00072c3c0) Stream removed, broadcasting: 3\nI0904 14:45:26.587839 3348 log.go:181] (0xc000018fd0) (0xc00072c460) Stream removed, broadcasting: 5\n" Sep 4 14:45:26.598: INFO: stdout: "" Sep 4 14:45:26.598: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-8549 execpod-affinityhfmkm -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 30348' Sep 4 14:45:26.806: INFO: stderr: "I0904 14:45:26.726588 3365 log.go:181] (0xc000d46dc0) (0xc000aaa5a0) Create stream\nI0904 14:45:26.726663 3365 log.go:181] (0xc000d46dc0) (0xc000aaa5a0) Stream added, broadcasting: 1\nI0904 14:45:26.732176 3365 log.go:181] (0xc000d46dc0) Reply frame received for 1\nI0904 14:45:26.732237 3365 log.go:181] (0xc000d46dc0) (0xc000aaa000) Create stream\nI0904 14:45:26.732261 3365 log.go:181] (0xc000d46dc0) (0xc000aaa000) Stream added, broadcasting: 3\nI0904 14:45:26.733377 3365 log.go:181] (0xc000d46dc0) Reply frame received for 3\nI0904 14:45:26.733436 3365 log.go:181] (0xc000d46dc0) (0xc000aaa0a0) Create stream\nI0904 14:45:26.733450 3365 log.go:181] (0xc000d46dc0) (0xc000aaa0a0) Stream added, broadcasting: 5\nI0904 14:45:26.734271 3365 log.go:181] (0xc000d46dc0) Reply frame received for 5\nI0904 14:45:26.797367 3365 log.go:181] (0xc000d46dc0) Data frame received for 5\nI0904 14:45:26.797392 3365 log.go:181] (0xc000aaa0a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 30348\nConnection to 172.18.0.14 30348 port [tcp/30348] succeeded!\nI0904 14:45:26.797422 3365 log.go:181] (0xc000d46dc0) Data frame received for 3\nI0904 14:45:26.797463 3365 log.go:181] (0xc000aaa000) (3) Data frame handling\nI0904 14:45:26.797508 3365 log.go:181] (0xc000aaa0a0) (5) Data frame sent\nI0904 14:45:26.797534 3365 log.go:181] (0xc000d46dc0) Data frame received for 5\nI0904 14:45:26.797546 3365 log.go:181] (0xc000aaa0a0) (5) Data frame handling\nI0904 14:45:26.798640 3365 log.go:181] (0xc000d46dc0) Data frame received for 1\nI0904 14:45:26.798670 3365 log.go:181] (0xc000aaa5a0) (1) Data frame handling\nI0904 14:45:26.798685 3365 log.go:181] (0xc000aaa5a0) (1) Data frame sent\nI0904 14:45:26.798707 3365 log.go:181] (0xc000d46dc0) (0xc000aaa5a0) Stream 
removed, broadcasting: 1\nI0904 14:45:26.798748 3365 log.go:181] (0xc000d46dc0) Go away received\nI0904 14:45:26.799144 3365 log.go:181] (0xc000d46dc0) (0xc000aaa5a0) Stream removed, broadcasting: 1\nI0904 14:45:26.799166 3365 log.go:181] (0xc000d46dc0) (0xc000aaa000) Stream removed, broadcasting: 3\nI0904 14:45:26.799177 3365 log.go:181] (0xc000d46dc0) (0xc000aaa0a0) Stream removed, broadcasting: 5\n" Sep 4 14:45:26.806: INFO: stdout: "" Sep 4 14:45:26.814: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-8549 execpod-affinityhfmkm -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:30348/ ; done' Sep 4 14:45:27.174: INFO: stderr: "I0904 14:45:26.999516 3383 log.go:181] (0xc00064bb80) (0xc000642dc0) Create stream\nI0904 14:45:26.999564 3383 log.go:181] (0xc00064bb80) (0xc000642dc0) Stream added, broadcasting: 1\nI0904 14:45:27.001915 3383 log.go:181] (0xc00064bb80) Reply frame received for 1\nI0904 14:45:27.001953 3383 log.go:181] (0xc00064bb80) (0xc000548000) Create stream\nI0904 14:45:27.001975 3383 log.go:181] (0xc00064bb80) (0xc000548000) Stream added, broadcasting: 3\nI0904 14:45:27.002787 3383 log.go:181] (0xc00064bb80) Reply frame received for 3\nI0904 14:45:27.002801 3383 log.go:181] (0xc00064bb80) (0xc000642e60) Create stream\nI0904 14:45:27.002806 3383 log.go:181] (0xc00064bb80) (0xc000642e60) Stream added, broadcasting: 5\nI0904 14:45:27.003852 3383 log.go:181] (0xc00064bb80) Reply frame received for 5\nI0904 14:45:27.068502 3383 log.go:181] (0xc00064bb80) Data frame received for 5\nI0904 14:45:27.068532 3383 log.go:181] (0xc000642e60) (5) Data frame handling\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.068565 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.068597 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.068609 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.068620 3383 log.go:181] (0xc000642e60) (5) Data frame sent\nI0904 14:45:27.073904 3383 log.go:181] (0xc00064bb80) Data frame received for 5\nI0904 14:45:27.073940 3383 log.go:181] (0xc000642e60) (5) Data frame handling\nI0904 14:45:27.073955 3383 log.go:181] (0xc000642e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.073975 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.073991 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.074003 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.074018 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.074028 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.074040 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.077168 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.077189 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.077204 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.077621 3383 log.go:181] (0xc00064bb80) Data frame received for 5\nI0904 14:45:27.077661 3383 log.go:181] (0xc000642e60) (5) Data frame handling\nI0904 14:45:27.077675 3383 log.go:181] (0xc000642e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.077692 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.077700 3383 log.go:181] 
(0xc000548000) (3) Data frame handling\nI0904 14:45:27.077709 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.081738 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.081750 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.081756 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.082226 3383 log.go:181] (0xc00064bb80) Data frame received for 5\nI0904 14:45:27.082242 3383 log.go:181] (0xc000642e60) (5) Data frame handling\nI0904 14:45:27.082249 3383 log.go:181] (0xc000642e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.082259 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.082264 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.082269 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.085877 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.085903 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.085920 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.086652 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.086680 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.086695 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.086716 3383 log.go:181] (0xc00064bb80) Data frame received for 5\nI0904 14:45:27.086731 3383 log.go:181] (0xc000642e60) (5) Data frame handling\nI0904 14:45:27.086751 3383 log.go:181] (0xc000642e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.091623 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.091643 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.091653 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.092182 3383 log.go:181] (0xc00064bb80) Data frame received for 5\nI0904 14:45:27.092191 3383 log.go:181] (0xc000642e60) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.092200 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.092230 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.092247 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.092258 3383 log.go:181] (0xc000642e60) (5) Data frame sent\nI0904 14:45:27.097079 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.097096 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.097109 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.097439 3383 log.go:181] (0xc00064bb80) Data frame received for 5\nI0904 14:45:27.097454 3383 log.go:181] (0xc000642e60) (5) Data frame handling\nI0904 14:45:27.097462 3383 log.go:181] (0xc000642e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.097484 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.097508 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.097528 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.101654 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.101675 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.101699 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.102160 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.102177 3383 
log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.102185 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.102195 3383 log.go:181] (0xc00064bb80) Data frame received for 5\nI0904 14:45:27.102204 3383 log.go:181] (0xc000642e60) (5) Data frame handling\nI0904 14:45:27.102213 3383 log.go:181] (0xc000642e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.110054 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.110088 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.110110 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.110279 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.110290 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.110296 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.110302 3383 log.go:181] (0xc00064bb80) Data frame received for 5\nI0904 14:45:27.110307 3383 log.go:181] (0xc000642e60) (5) Data frame handling\nI0904 14:45:27.110320 3383 log.go:181] (0xc000642e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.114856 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.114876 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.114891 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.115185 3383 log.go:181] (0xc00064bb80) Data frame received for 5\nI0904 14:45:27.115205 3383 log.go:181] (0xc000642e60) (5) Data frame handling\nI0904 14:45:27.115220 3383 log.go:181] (0xc000642e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.115235 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.115246 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.115258 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.120366 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.120379 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.120391 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.120972 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.120996 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.121004 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.121019 3383 log.go:181] (0xc00064bb80) Data frame received for 5\nI0904 14:45:27.121025 3383 log.go:181] (0xc000642e60) (5) Data frame handling\nI0904 14:45:27.121031 3383 log.go:181] (0xc000642e60) (5) Data frame sent\nI0904 14:45:27.121037 3383 log.go:181] (0xc00064bb80) Data frame received for 5\nI0904 14:45:27.121042 3383 log.go:181] (0xc000642e60) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.121058 3383 log.go:181] (0xc000642e60) (5) Data frame sent\nI0904 14:45:27.125327 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.125340 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.125346 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.125846 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.125864 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.125879 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.125901 3383 log.go:181] (0xc00064bb80) Data frame received for 5\nI0904 
14:45:27.125925 3383 log.go:181] (0xc000642e60) (5) Data frame handling\nI0904 14:45:27.125944 3383 log.go:181] (0xc000642e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.131026 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.131042 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.131059 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.131765 3383 log.go:181] (0xc00064bb80) Data frame received for 5\nI0904 14:45:27.131788 3383 log.go:181] (0xc000642e60) (5) Data frame handling\nI0904 14:45:27.131797 3383 log.go:181] (0xc000642e60) (5) Data frame sent\nI0904 14:45:27.131807 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.131814 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.131821 3383 log.go:181] (0xc000548000) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.137328 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.137347 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.137363 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.137960 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.137975 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.137992 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.138007 3383 log.go:181] (0xc00064bb80) Data frame received for 5\nI0904 14:45:27.138024 3383 log.go:181] (0xc000642e60) (5) Data frame handling\nI0904 14:45:27.138038 3383 log.go:181] (0xc000642e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.143985 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.144003 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.144013 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.145070 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.145101 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.145111 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.145125 3383 log.go:181] (0xc00064bb80) Data frame received for 5\nI0904 14:45:27.145133 3383 log.go:181] (0xc000642e60) (5) Data frame handling\nI0904 14:45:27.145142 3383 log.go:181] (0xc000642e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.149676 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.149696 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.149714 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.150396 3383 log.go:181] (0xc00064bb80) Data frame received for 5\nI0904 14:45:27.150432 3383 log.go:181] (0xc000642e60) (5) Data frame handling\nI0904 14:45:27.150450 3383 log.go:181] (0xc000642e60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.150500 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.150530 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.150560 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 14:45:27.156831 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.156858 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.156879 3383 log.go:181] (0xc000548000) (3) Data frame sent\nI0904 
14:45:27.158072 3383 log.go:181] (0xc00064bb80) Data frame received for 5\nI0904 14:45:27.158090 3383 log.go:181] (0xc000642e60) (5) Data frame handling\nI0904 14:45:27.158296 3383 log.go:181] (0xc00064bb80) Data frame received for 3\nI0904 14:45:27.158316 3383 log.go:181] (0xc000548000) (3) Data frame handling\nI0904 14:45:27.160203 3383 log.go:181] (0xc00064bb80) Data frame received for 1\nI0904 14:45:27.160225 3383 log.go:181] (0xc000642dc0) (1) Data frame handling\nI0904 14:45:27.160242 3383 log.go:181] (0xc000642dc0) (1) Data frame sent\nI0904 14:45:27.160254 3383 log.go:181] (0xc00064bb80) (0xc000642dc0) Stream removed, broadcasting: 1\nI0904 14:45:27.160267 3383 log.go:181] (0xc00064bb80) Go away received\nI0904 14:45:27.160833 3383 log.go:181] (0xc00064bb80) (0xc000642dc0) Stream removed, broadcasting: 1\nI0904 14:45:27.160855 3383 log.go:181] (0xc00064bb80) (0xc000548000) Stream removed, broadcasting: 3\nI0904 14:45:27.160863 3383 log.go:181] (0xc00064bb80) (0xc000642e60) Stream removed, broadcasting: 5\n" Sep 4 14:45:27.175: INFO: stdout: "\naffinity-nodeport-transition-h8q5g\naffinity-nodeport-transition-h8q5g\naffinity-nodeport-transition-n72mb\naffinity-nodeport-transition-n72mb\naffinity-nodeport-transition-n72mb\naffinity-nodeport-transition-n72mb\naffinity-nodeport-transition-n72mb\naffinity-nodeport-transition-n72mb\naffinity-nodeport-transition-75lzr\naffinity-nodeport-transition-75lzr\naffinity-nodeport-transition-75lzr\naffinity-nodeport-transition-75lzr\naffinity-nodeport-transition-n72mb\naffinity-nodeport-transition-75lzr\naffinity-nodeport-transition-n72mb\naffinity-nodeport-transition-75lzr" Sep 4 14:45:27.175: INFO: Received response from host: affinity-nodeport-transition-h8q5g Sep 4 14:45:27.175: INFO: Received response from host: affinity-nodeport-transition-h8q5g Sep 4 14:45:27.175: INFO: Received response from host: affinity-nodeport-transition-n72mb Sep 4 14:45:27.175: INFO: Received response from host: affinity-nodeport-transition-n72mb Sep 4 14:45:27.175: INFO: Received response from host: affinity-nodeport-transition-n72mb Sep 4 14:45:27.175: INFO: Received response from host: affinity-nodeport-transition-n72mb Sep 4 14:45:27.175: INFO: Received response from host: affinity-nodeport-transition-n72mb Sep 4 14:45:27.175: INFO: Received response from host: affinity-nodeport-transition-n72mb Sep 4 14:45:27.175: INFO: Received response from host: affinity-nodeport-transition-75lzr Sep 4 14:45:27.175: INFO: Received response from host: affinity-nodeport-transition-75lzr Sep 4 14:45:27.175: INFO: Received response from host: affinity-nodeport-transition-75lzr Sep 4 14:45:27.175: INFO: Received response from host: affinity-nodeport-transition-75lzr Sep 4 14:45:27.175: INFO: Received response from host: affinity-nodeport-transition-n72mb Sep 4 14:45:27.175: INFO: Received response from host: affinity-nodeport-transition-75lzr Sep 4 14:45:27.175: INFO: Received response from host: affinity-nodeport-transition-n72mb Sep 4 14:45:27.175: INFO: Received response from host: affinity-nodeport-transition-75lzr Sep 4 14:45:27.182: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-8549 execpod-affinityhfmkm -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:30348/ ; done' Sep 4 14:45:27.519: INFO: stderr: "I0904 14:45:27.340203 3395 log.go:181] (0xc000744fd0) (0xc000c5e460) Create stream\nI0904 14:45:27.340240 3395 log.go:181] (0xc000744fd0) 
(0xc000c5e460) Stream added, broadcasting: 1\nI0904 14:45:27.347929 3395 log.go:181] (0xc000744fd0) Reply frame received for 1\nI0904 14:45:27.347972 3395 log.go:181] (0xc000744fd0) (0xc000c84be0) Create stream\nI0904 14:45:27.347988 3395 log.go:181] (0xc000744fd0) (0xc000c84be0) Stream added, broadcasting: 3\nI0904 14:45:27.350896 3395 log.go:181] (0xc000744fd0) Reply frame received for 3\nI0904 14:45:27.350935 3395 log.go:181] (0xc000744fd0) (0xc0008c6000) Create stream\nI0904 14:45:27.350950 3395 log.go:181] (0xc000744fd0) (0xc0008c6000) Stream added, broadcasting: 5\nI0904 14:45:27.351610 3395 log.go:181] (0xc000744fd0) Reply frame received for 5\nI0904 14:45:27.421664 3395 log.go:181] (0xc000744fd0) Data frame received for 5\nI0904 14:45:27.421700 3395 log.go:181] (0xc0008c6000) (5) Data frame handling\nI0904 14:45:27.421714 3395 log.go:181] (0xc0008c6000) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.421735 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.421746 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.421758 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.425178 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.425206 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.425231 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.425546 3395 log.go:181] (0xc000744fd0) Data frame received for 5\nI0904 14:45:27.425576 3395 log.go:181] (0xc0008c6000) (5) Data frame handling\nI0904 14:45:27.425592 3395 log.go:181] (0xc0008c6000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.425605 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.425612 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.425620 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.430214 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.430239 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.430258 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.430539 3395 log.go:181] (0xc000744fd0) Data frame received for 5\nI0904 14:45:27.430554 3395 log.go:181] (0xc0008c6000) (5) Data frame handling\nI0904 14:45:27.430563 3395 log.go:181] (0xc0008c6000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.430571 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.430599 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.430612 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.434835 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.434854 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.434873 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.435336 3395 log.go:181] (0xc000744fd0) Data frame received for 5\nI0904 14:45:27.435361 3395 log.go:181] (0xc0008c6000) (5) Data frame handling\nI0904 14:45:27.435372 3395 log.go:181] (0xc0008c6000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.435389 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.435399 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.435408 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.440072 
3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.440097 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.440120 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.440319 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.440334 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.440347 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.440374 3395 log.go:181] (0xc000744fd0) Data frame received for 5\nI0904 14:45:27.440384 3395 log.go:181] (0xc0008c6000) (5) Data frame handling\nI0904 14:45:27.440392 3395 log.go:181] (0xc0008c6000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.445245 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.445260 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.445267 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.446012 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.446027 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.446050 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.446064 3395 log.go:181] (0xc000744fd0) Data frame received for 5\nI0904 14:45:27.446071 3395 log.go:181] (0xc0008c6000) (5) Data frame handling\nI0904 14:45:27.446080 3395 log.go:181] (0xc0008c6000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.449745 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.449839 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.449880 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.450351 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.450368 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.450393 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.450412 3395 log.go:181] (0xc000744fd0) Data frame received for 5\nI0904 14:45:27.450425 3395 log.go:181] (0xc0008c6000) (5) Data frame handling\nI0904 14:45:27.450434 3395 log.go:181] (0xc0008c6000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.453648 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.453681 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.453698 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.454096 3395 log.go:181] (0xc000744fd0) Data frame received for 5\nI0904 14:45:27.454114 3395 log.go:181] (0xc0008c6000) (5) Data frame handling\nI0904 14:45:27.454127 3395 log.go:181] (0xc0008c6000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.454233 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.454259 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.454289 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.457645 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.457680 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.457704 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.458445 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.458477 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.458491 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 
14:45:27.458507 3395 log.go:181] (0xc000744fd0) Data frame received for 5\nI0904 14:45:27.458516 3395 log.go:181] (0xc0008c6000) (5) Data frame handling\nI0904 14:45:27.458525 3395 log.go:181] (0xc0008c6000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.462129 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.462148 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.462171 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.462690 3395 log.go:181] (0xc000744fd0) Data frame received for 5\nI0904 14:45:27.462719 3395 log.go:181] (0xc0008c6000) (5) Data frame handling\nI0904 14:45:27.462730 3395 log.go:181] (0xc0008c6000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.462752 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.462773 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.462783 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.469657 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.469677 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.469688 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.470505 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.470540 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.470553 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.470571 3395 log.go:181] (0xc000744fd0) Data frame received for 5\nI0904 14:45:27.470581 3395 log.go:181] (0xc0008c6000) (5) Data frame handling\nI0904 14:45:27.470595 3395 log.go:181] (0xc0008c6000) (5) Data frame sent\nI0904 14:45:27.470608 3395 log.go:181] (0xc000744fd0) Data frame received for 5\nI0904 14:45:27.470618 3395 log.go:181] (0xc0008c6000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.470639 3395 log.go:181] (0xc0008c6000) (5) Data frame sent\nI0904 14:45:27.474558 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.474577 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.474594 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.475149 3395 log.go:181] (0xc000744fd0) Data frame received for 5\nI0904 14:45:27.475190 3395 log.go:181] (0xc0008c6000) (5) Data frame handling\nI0904 14:45:27.475203 3395 log.go:181] (0xc0008c6000) (5) Data frame sent\n+ echo\n+ curl -q -sI0904 14:45:27.475225 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.475257 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.475273 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.475299 3395 log.go:181] (0xc000744fd0) Data frame received for 5\nI0904 14:45:27.475311 3395 log.go:181] (0xc0008c6000) (5) Data frame handling\nI0904 14:45:27.475321 3395 log.go:181] (0xc0008c6000) (5) Data frame sent\n --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.483163 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.483181 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.483191 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.484031 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.484061 3395 log.go:181] (0xc000744fd0) Data frame received for 5\nI0904 14:45:27.484112 3395 log.go:181] (0xc0008c6000) (5) Data frame 
handling\nI0904 14:45:27.484153 3395 log.go:181] (0xc0008c6000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.484201 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.484240 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.490169 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.490189 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.490207 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.491045 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.491072 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.491085 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.491116 3395 log.go:181] (0xc000744fd0) Data frame received for 5\nI0904 14:45:27.491154 3395 log.go:181] (0xc0008c6000) (5) Data frame handling\nI0904 14:45:27.491186 3395 log.go:181] (0xc0008c6000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.496698 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.496718 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.496878 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.497993 3395 log.go:181] (0xc000744fd0) Data frame received for 5\nI0904 14:45:27.498033 3395 log.go:181] (0xc0008c6000) (5) Data frame handling\nI0904 14:45:27.498053 3395 log.go:181] (0xc0008c6000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.498083 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.498101 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.498129 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.502285 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.502322 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.502360 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.503370 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.503404 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.503429 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.503448 3395 log.go:181] (0xc000744fd0) Data frame received for 5\nI0904 14:45:27.503460 3395 log.go:181] (0xc0008c6000) (5) Data frame handling\nI0904 14:45:27.503473 3395 log.go:181] (0xc0008c6000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0904 14:45:27.508113 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.508141 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.508159 3395 log.go:181] (0xc000c84be0) (3) Data frame sent\nI0904 14:45:27.509140 3395 log.go:181] (0xc000744fd0) Data frame received for 5\nI0904 14:45:27.509162 3395 log.go:181] (0xc0008c6000) (5) Data frame handling\nI0904 14:45:27.509296 3395 log.go:181] (0xc000744fd0) Data frame received for 3\nI0904 14:45:27.509340 3395 log.go:181] (0xc000c84be0) (3) Data frame handling\nI0904 14:45:27.511240 3395 log.go:181] (0xc000744fd0) Data frame received for 1\nI0904 14:45:27.511259 3395 log.go:181] (0xc000c5e460) (1) Data frame handling\nI0904 14:45:27.511272 3395 log.go:181] (0xc000c5e460) (1) Data frame sent\nI0904 14:45:27.511286 3395 log.go:181] (0xc000744fd0) (0xc000c5e460) Stream removed, broadcasting: 1\nI0904 14:45:27.511303 3395 
log.go:181] (0xc000744fd0) Go away received\nI0904 14:45:27.511695 3395 log.go:181] (0xc000744fd0) (0xc000c5e460) Stream removed, broadcasting: 1\nI0904 14:45:27.511711 3395 log.go:181] (0xc000744fd0) (0xc000c84be0) Stream removed, broadcasting: 3\nI0904 14:45:27.511718 3395 log.go:181] (0xc000744fd0) (0xc0008c6000) Stream removed, broadcasting: 5\n" Sep 4 14:45:27.519: INFO: stdout: "\naffinity-nodeport-transition-n72mb\naffinity-nodeport-transition-n72mb\naffinity-nodeport-transition-n72mb\naffinity-nodeport-transition-n72mb\naffinity-nodeport-transition-n72mb\naffinity-nodeport-transition-n72mb\naffinity-nodeport-transition-n72mb\naffinity-nodeport-transition-n72mb\naffinity-nodeport-transition-n72mb\naffinity-nodeport-transition-n72mb\naffinity-nodeport-transition-n72mb\naffinity-nodeport-transition-n72mb\naffinity-nodeport-transition-n72mb\naffinity-nodeport-transition-n72mb\naffinity-nodeport-transition-n72mb\naffinity-nodeport-transition-n72mb" Sep 4 14:45:27.519: INFO: Received response from host: affinity-nodeport-transition-n72mb Sep 4 14:45:27.519: INFO: Received response from host: affinity-nodeport-transition-n72mb Sep 4 14:45:27.519: INFO: Received response from host: affinity-nodeport-transition-n72mb Sep 4 14:45:27.519: INFO: Received response from host: affinity-nodeport-transition-n72mb Sep 4 14:45:27.519: INFO: Received response from host: affinity-nodeport-transition-n72mb Sep 4 14:45:27.519: INFO: Received response from host: affinity-nodeport-transition-n72mb Sep 4 14:45:27.519: INFO: Received response from host: affinity-nodeport-transition-n72mb Sep 4 14:45:27.519: INFO: Received response from host: affinity-nodeport-transition-n72mb Sep 4 14:45:27.519: INFO: Received response from host: affinity-nodeport-transition-n72mb Sep 4 14:45:27.519: INFO: Received response from host: affinity-nodeport-transition-n72mb Sep 4 14:45:27.519: INFO: Received response from host: affinity-nodeport-transition-n72mb Sep 4 14:45:27.519: INFO: Received response from host: affinity-nodeport-transition-n72mb Sep 4 14:45:27.519: INFO: Received response from host: affinity-nodeport-transition-n72mb Sep 4 14:45:27.519: INFO: Received response from host: affinity-nodeport-transition-n72mb Sep 4 14:45:27.519: INFO: Received response from host: affinity-nodeport-transition-n72mb Sep 4 14:45:27.519: INFO: Received response from host: affinity-nodeport-transition-n72mb Sep 4 14:45:27.519: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-8549, will wait for the garbage collector to delete the pods Sep 4 14:45:27.623: INFO: Deleting ReplicationController affinity-nodeport-transition took: 4.770273ms Sep 4 14:45:28.123: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 500.207372ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:45:40.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8549" for this suite. 
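The two stdout blocks above are the heart of the affinity check: after the switch to ClientIP affinity, all 16 curl probes against the NodePort return the same backend (affinity-nodeport-transition-n72mb), whereas the earlier probes were spread across h8q5g, n72mb, and 75lzr. For orientation, here is a minimal client-go sketch of the toggle the test exercises; the namespace and service name are illustrative stand-ins, not values from this run, and error handling is abbreviated.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// The suite reads its kubeconfig from /root/.kube/config; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Illustrative names: a NodePort service in a test namespace.
	svc, err := cs.CoreV1().Services("services-demo").Get(context.TODO(), "affinity-nodeport", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// ClientIP pins each client to one endpoint; None restores free load-balancing.
	svc.Spec.SessionAffinity = corev1.ServiceAffinityClientIP
	if _, err := cs.CoreV1().Services("services-demo").Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("sessionAffinity switched to ClientIP")
}

With ClientIP in effect, repeated requests from one source (here, the exec pod) should keep landing on a single pod, which is exactly what the second stdout block shows.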
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:31.854 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":283,"skipped":4625,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:45:40.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2162 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Sep 4 14:45:40.470: INFO: Found 0 stateful pods, waiting for 3 Sep 4 14:45:50.482: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 4 14:45:50.482: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 4 14:45:50.482: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Sep 4 14:46:00.476: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 4 14:46:00.476: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 4 14:46:00.476: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Sep 4 14:46:00.519: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Sep 4 14:46:10.591: INFO: Updating stateful set ss2 Sep 4 14:46:10.632: INFO: Waiting for Pod 
statefulset-2162/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Sep 4 14:46:21.490: INFO: Found 2 stateful pods, waiting for 3 Sep 4 14:46:31.495: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 4 14:46:31.495: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 4 14:46:31.495: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Sep 4 14:46:31.517: INFO: Updating stateful set ss2 Sep 4 14:46:31.574: INFO: Waiting for Pod statefulset-2162/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 4 14:46:41.601: INFO: Updating stateful set ss2 Sep 4 14:46:41.952: INFO: Waiting for StatefulSet statefulset-2162/ss2 to complete update Sep 4 14:46:41.952: INFO: Waiting for Pod statefulset-2162/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 4 14:46:51.960: INFO: Waiting for StatefulSet statefulset-2162/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 4 14:47:01.961: INFO: Deleting all statefulset in ns statefulset-2162 Sep 4 14:47:01.963: INFO: Scaling statefulset ss2 to 0 Sep 4 14:47:32.004: INFO: Waiting for statefulset status.replicas updated to 0 Sep 4 14:47:32.006: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:47:32.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2162" for this suite. 
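The canary step (only ss2-2 moves to the new revision) and the later phased roll both hang off StatefulSet.spec.updateStrategy.rollingUpdate.partition: only pods with an ordinal greater than or equal to the partition are moved to the new template, and the "to have revision ... update revision ..." lines are the framework comparing each pod's current controller revision against the target one. A hedged client-go sketch of the canary step follows; the namespace is an illustrative stand-in, though the set name and image mirror the log.

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ss, err := cs.AppsV1().StatefulSets("statefulset-demo").Get(context.TODO(), "ss2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// With 3 replicas, partition=2 updates only the canary ss2-2;
	// lowering the partition afterwards rolls ss2-1 and ss2-0 in ordinal order.
	partition := int32(2)
	ss.Spec.UpdateStrategy = appsv1.StatefulSetUpdateStrategy{
		Type:          appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{Partition: &partition},
	}
	ss.Spec.Template.Spec.Containers[0].Image = "docker.io/library/httpd:2.4.39-alpine"
	if _, err := cs.AppsV1().StatefulSets("statefulset-demo").Update(context.TODO(), ss, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}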
• [SLOW TEST:111.831 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":303,"completed":284,"skipped":4639,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:47:32.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Sep 4 14:47:32.122: INFO: Waiting up to 5m0s for pod "pod-78757189-9200-4b16-83fb-c83c66ace0b5" in namespace "emptydir-2135" to be "Succeeded or Failed" Sep 4 14:47:32.138: INFO: Pod "pod-78757189-9200-4b16-83fb-c83c66ace0b5": Phase="Pending", Reason="", readiness=false. Elapsed: 15.837983ms Sep 4 14:47:34.142: INFO: Pod "pod-78757189-9200-4b16-83fb-c83c66ace0b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019361669s Sep 4 14:47:36.352: INFO: Pod "pod-78757189-9200-4b16-83fb-c83c66ace0b5": Phase="Running", Reason="", readiness=true. Elapsed: 4.229963553s Sep 4 14:47:38.355: INFO: Pod "pod-78757189-9200-4b16-83fb-c83c66ace0b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.232989896s STEP: Saw pod success Sep 4 14:47:38.355: INFO: Pod "pod-78757189-9200-4b16-83fb-c83c66ace0b5" satisfied condition "Succeeded or Failed" Sep 4 14:47:38.362: INFO: Trying to get logs from node latest-worker2 pod pod-78757189-9200-4b16-83fb-c83c66ace0b5 container test-container: STEP: delete the pod Sep 4 14:47:38.517: INFO: Waiting for pod pod-78757189-9200-4b16-83fb-c83c66ace0b5 to disappear Sep 4 14:47:38.523: INFO: Pod pod-78757189-9200-4b16-83fb-c83c66ace0b5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:47:38.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2135" for this suite. 
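The emptydir cases in this run (root,0666 earlier; non-root,0666 here; root,0644 later) all follow one recipe: schedule a pod that mounts an emptyDir volume on the default medium, create a file with the requested mode and owner, verify it, and exit 0 so the pod reaches Succeeded, which is the "Succeeded or Failed" condition being polled above. A rough sketch of such a pod, with an assumed busybox image standing in for the suite's actual test image:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-check"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "scratch",
				// Default medium = node-local disk; Medium: "Memory" would use tmpfs instead.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "touch /mnt/f && chmod 0666 /mnt/f && ls -l /mnt/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}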
• [SLOW TEST:6.515 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":285,"skipped":4644,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:47:38.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-1e320dd2-a529-47ce-9a60-39ed369627f8 STEP: Creating a pod to test consume configMaps Sep 4 14:47:38.758: INFO: Waiting up to 5m0s for pod "pod-configmaps-54d3d06f-d538-4cb0-b556-54e02fffaa33" in namespace "configmap-1001" to be "Succeeded or Failed" Sep 4 14:47:38.849: INFO: Pod "pod-configmaps-54d3d06f-d538-4cb0-b556-54e02fffaa33": Phase="Pending", Reason="", readiness=false. Elapsed: 91.775123ms Sep 4 14:47:40.982: INFO: Pod "pod-configmaps-54d3d06f-d538-4cb0-b556-54e02fffaa33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224002965s Sep 4 14:47:42.986: INFO: Pod "pod-configmaps-54d3d06f-d538-4cb0-b556-54e02fffaa33": Phase="Running", Reason="", readiness=true. Elapsed: 4.228139039s Sep 4 14:47:44.990: INFO: Pod "pod-configmaps-54d3d06f-d538-4cb0-b556-54e02fffaa33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.232451721s STEP: Saw pod success Sep 4 14:47:44.990: INFO: Pod "pod-configmaps-54d3d06f-d538-4cb0-b556-54e02fffaa33" satisfied condition "Succeeded or Failed" Sep 4 14:47:44.993: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-54d3d06f-d538-4cb0-b556-54e02fffaa33 container configmap-volume-test: STEP: delete the pod Sep 4 14:47:45.055: INFO: Waiting for pod pod-configmaps-54d3d06f-d538-4cb0-b556-54e02fffaa33 to disappear Sep 4 14:47:45.067: INFO: Pod pod-configmaps-54d3d06f-d538-4cb0-b556-54e02fffaa33 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:47:45.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1001" for this suite. 
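"Consumable in multiple volumes in the same pod" means two volume entries pointing at the same ConfigMap, mounted at two different paths, with the kubelet materializing both. A minimal sketch, assuming an existing ConfigMap named shared-config (all names illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Both volumes reference the same ConfigMap.
	cmSource := func() corev1.VolumeSource {
		return corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
			LocalObjectReference: corev1.LocalObjectReference{Name: "shared-config"},
		}}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-two-mounts"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				{Name: "cfg-a", VolumeSource: cmSource()},
				{Name: "cfg-b", VolumeSource: cmSource()},
			},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/cfg-a/* /etc/cfg-b/*"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "cfg-a", MountPath: "/etc/cfg-a"},
					{Name: "cfg-b", MountPath: "/etc/cfg-b"},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}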
• [SLOW TEST:6.525 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":286,"skipped":4655,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:47:45.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0904 14:47:46.277159 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Sep 4 14:48:48.370: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:48:48.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-747" for this suite.
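The orphaning observed above is entirely a function of the delete call: with PropagationPolicy set to Orphan, the garbage collector strips the owner references instead of cascading, so the Deployment's ReplicaSet survives the deletion, which is what the test then watches for. A minimal sketch (the deployment name and namespace are illustrative):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Orphan: delete the Deployment but leave its ReplicaSet (and pods) behind.
	orphan := metav1.DeletePropagationOrphan
	if err := cs.AppsV1().Deployments("default").Delete(context.TODO(), "demo-deployment",
		metav1.DeleteOptions{PropagationPolicy: &orphan}); err != nil {
		panic(err)
	}
}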
• [SLOW TEST:63.302 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":303,"completed":287,"skipped":4662,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:48:48.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Sep 4 14:48:48.491: INFO: Waiting up to 5m0s for pod "pod-c86fa5f3-eb72-419f-b13f-27730a89dacc" in namespace "emptydir-2769" to be "Succeeded or Failed" Sep 4 14:48:48.519: INFO: Pod "pod-c86fa5f3-eb72-419f-b13f-27730a89dacc": Phase="Pending", Reason="", readiness=false. Elapsed: 28.16175ms Sep 4 14:48:50.617: INFO: Pod "pod-c86fa5f3-eb72-419f-b13f-27730a89dacc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125726636s Sep 4 14:48:52.621: INFO: Pod "pod-c86fa5f3-eb72-419f-b13f-27730a89dacc": Phase="Running", Reason="", readiness=true. Elapsed: 4.129894606s Sep 4 14:48:54.625: INFO: Pod "pod-c86fa5f3-eb72-419f-b13f-27730a89dacc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.133951398s STEP: Saw pod success Sep 4 14:48:54.625: INFO: Pod "pod-c86fa5f3-eb72-419f-b13f-27730a89dacc" satisfied condition "Succeeded or Failed" Sep 4 14:48:54.628: INFO: Trying to get logs from node latest-worker2 pod pod-c86fa5f3-eb72-419f-b13f-27730a89dacc container test-container: STEP: delete the pod Sep 4 14:48:54.844: INFO: Waiting for pod pod-c86fa5f3-eb72-419f-b13f-27730a89dacc to disappear Sep 4 14:48:54.846: INFO: Pod pod-c86fa5f3-eb72-419f-b13f-27730a89dacc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:48:54.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2769" for this suite. 
• [SLOW TEST:6.480 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":288,"skipped":4669,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:48:54.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:49:06.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5223" for this suite. • [SLOW TEST:11.734 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":303,"completed":289,"skipped":4672,"failed":0} SS ------------------------------ [sig-api-machinery] Events should delete a collection of events [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:49:06.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events Sep 4 14:49:06.681: INFO: created test-event-1 Sep 4 14:49:06.689: INFO: created test-event-2 Sep 4 14:49:06.693: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Sep 4 14:49:06.699: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Sep 4 14:49:06.761: INFO: requesting list of events to confirm quantity [AfterEach] [sig-api-machinery] Events /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:49:06.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6102" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":303,"completed":290,"skipped":4674,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:49:06.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-qbpw STEP: Creating a pod to test atomic-volume-subpath Sep 4 14:49:06.922: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-qbpw" in namespace "subpath-2382" to be "Succeeded or Failed" Sep 4 14:49:06.959: INFO: Pod "pod-subpath-test-projected-qbpw": Phase="Pending", Reason="", readiness=false. 
Elapsed: 36.598416ms Sep 4 14:49:08.964: INFO: Pod "pod-subpath-test-projected-qbpw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041373103s Sep 4 14:49:10.969: INFO: Pod "pod-subpath-test-projected-qbpw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046729946s Sep 4 14:49:12.998: INFO: Pod "pod-subpath-test-projected-qbpw": Phase="Running", Reason="", readiness=true. Elapsed: 6.075388916s Sep 4 14:49:15.003: INFO: Pod "pod-subpath-test-projected-qbpw": Phase="Running", Reason="", readiness=true. Elapsed: 8.080418065s Sep 4 14:49:17.006: INFO: Pod "pod-subpath-test-projected-qbpw": Phase="Running", Reason="", readiness=true. Elapsed: 10.084012652s Sep 4 14:49:19.011: INFO: Pod "pod-subpath-test-projected-qbpw": Phase="Running", Reason="", readiness=true. Elapsed: 12.088363706s Sep 4 14:49:21.030: INFO: Pod "pod-subpath-test-projected-qbpw": Phase="Running", Reason="", readiness=true. Elapsed: 14.107794402s Sep 4 14:49:23.034: INFO: Pod "pod-subpath-test-projected-qbpw": Phase="Running", Reason="", readiness=true. Elapsed: 16.111416678s Sep 4 14:49:25.038: INFO: Pod "pod-subpath-test-projected-qbpw": Phase="Running", Reason="", readiness=true. Elapsed: 18.115425513s Sep 4 14:49:27.042: INFO: Pod "pod-subpath-test-projected-qbpw": Phase="Running", Reason="", readiness=true. Elapsed: 20.119234618s Sep 4 14:49:29.054: INFO: Pod "pod-subpath-test-projected-qbpw": Phase="Running", Reason="", readiness=true. Elapsed: 22.131875788s Sep 4 14:49:31.420: INFO: Pod "pod-subpath-test-projected-qbpw": Phase="Running", Reason="", readiness=true. Elapsed: 24.497159919s Sep 4 14:49:33.424: INFO: Pod "pod-subpath-test-projected-qbpw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.502017639s STEP: Saw pod success Sep 4 14:49:33.425: INFO: Pod "pod-subpath-test-projected-qbpw" satisfied condition "Succeeded or Failed" Sep 4 14:49:33.427: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-qbpw container test-container-subpath-projected-qbpw: STEP: delete the pod Sep 4 14:49:33.504: INFO: Waiting for pod pod-subpath-test-projected-qbpw to disappear Sep 4 14:49:33.520: INFO: Pod pod-subpath-test-projected-qbpw no longer exists STEP: Deleting pod pod-subpath-test-projected-qbpw Sep 4 14:49:33.520: INFO: Deleting pod "pod-subpath-test-projected-qbpw" in namespace "subpath-2382" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:49:33.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2382" for this suite. 
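The long Running phase above (roughly 26 seconds before Succeeded) is the point of the atomic-writer subpath cases: the container repeatedly reads a single key of the projected volume through a VolumeMount.SubPath and must keep seeing the expected content before exiting 0. A hedged sketch of that mount shape, assuming a ConfigMap named subpath-data with a key file.txt (both illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-projected"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "proj",
				VolumeSource: corev1.VolumeSource{Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "subpath-data"},
						},
					}},
				}},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container-subpath",
				Image: "busybox",
				// Poll the file for a while, mirroring the test's extended Running phase.
				Command: []string{"sh", "-c", "for i in $(seq 1 20); do cat /probe; sleep 1; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "proj",
					MountPath: "/probe",
					SubPath:   "file.txt", // mount a single key, not the whole volume
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}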
• [SLOW TEST:26.728 seconds] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":303,"completed":291,"skipped":4704,"failed":0} SS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:49:33.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Sep 4 14:49:38.666: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:49:39.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2635" for this suite. • [SLOW TEST:6.157 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":303,"completed":292,"skipped":4706,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:49:39.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:49:51.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8545" for this suite. • [SLOW TEST:11.583 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":303,"completed":293,"skipped":4714,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:49:51.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Sep 4 14:49:56.417: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:49:56.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6734" for this suite. 
• [SLOW TEST:5.269 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":303,"completed":294,"skipped":4723,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:49:56.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Sep 4 14:50:04.527: INFO: 10 pods remaining Sep 4 14:50:04.527: INFO: 1 pod has nil DeletionTimestamp Sep 4 14:50:04.527: INFO: Sep 4 14:50:06.404: INFO: 1 pod remaining Sep 4 14:50:06.404: INFO: 0 pods have nil DeletionTimestamp Sep 4 14:50:06.404: INFO: Sep 4 14:50:08.463: INFO: 0 pods remaining Sep 4 14:50:08.463: INFO: 0 pods have nil DeletionTimestamp Sep 4 14:50:08.463: INFO: STEP: Gathering metrics W0904 14:50:09.875706 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Sep 4 14:51:12.319: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:51:12.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-350" for this suite.
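The shrinking "pods remaining" counts above are foreground cascading at work: with PropagationPolicy set to Foreground, the rc receives a deletionTimestamp plus the foregroundDeletion finalizer and is only removed once its dependents are gone, so it is literally "kept around until all its pods are deleted." A minimal sketch (the rc name and namespace are illustrative):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Foreground: the rc stays (with a deletionTimestamp) until every pod it owns is gone.
	foreground := metav1.DeletePropagationForeground
	if err := cs.CoreV1().ReplicationControllers("default").Delete(context.TODO(), "demo-rc",
		metav1.DeleteOptions{PropagationPolicy: &foreground}); err != nil {
		panic(err)
	}
}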
• [SLOW TEST:75.785 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":303,"completed":295,"skipped":4764,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:51:12.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0904 14:51:25.953868 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Sep 4 14:52:28.032: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering. Sep 4 14:52:28.032: INFO: Deleting pod "simpletest-rc-to-be-deleted-777nm" in namespace "gc-5918" Sep 4 14:52:28.525: INFO: Deleting pod "simpletest-rc-to-be-deleted-9v9mq" in namespace "gc-5918" Sep 4 14:52:28.934: INFO: Deleting pod "simpletest-rc-to-be-deleted-btwwq" in namespace "gc-5918" Sep 4 14:52:29.194: INFO: Deleting pod "simpletest-rc-to-be-deleted-ckvhr" in namespace "gc-5918" Sep 4 14:52:29.907: INFO: Deleting pod "simpletest-rc-to-be-deleted-cxt6j" in namespace "gc-5918" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:52:30.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5918" for this suite.
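The two-owner behavior above rests on ownerReferences being a list: the test appends the surviving rc as a second owner on half of the pods, and the garbage collector will not delete an object while it still has a valid, living owner, so those pods outlive the deleted rc. A sketch of adding a second owner reference (pod and rc names are illustrative):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	rcToStay, err := cs.CoreV1().ReplicationControllers("default").Get(ctx, "rc-to-stay", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("default").Get(ctx, "victim-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// A second, still-valid owner shields the pod from cascading deletion
	// when its first owner is removed.
	pod.OwnerReferences = append(pod.OwnerReferences, metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "ReplicationController",
		Name:       rcToStay.Name,
		UID:        rcToStay.UID,
	})
	if _, err := cs.CoreV1().Pods("default").Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}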
• [SLOW TEST:78.215 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":303,"completed":296,"skipped":4799,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:52:30.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl run pod /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 [It] should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Sep 4 14:52:30.798: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-131' Sep 4 14:52:31.194: INFO: stderr: "" Sep 4 14:52:31.194: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1550 Sep 4 14:52:31.386: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-131' Sep 4 14:52:35.647: INFO: stderr: "" Sep 4 14:52:35.647: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:52:35.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-131" for this suite. 
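In this release, kubectl run with --restart=Never creates a bare Pod with no managing controller, which is why the cleanup above is a plain "kubectl delete pods". The client-go equivalent is simply a Pod whose restartPolicy is Never; a minimal sketch (namespace is an illustrative stand-in, the names and image mirror the log):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // the --restart=Never part
			Containers: []corev1.Container{{
				Name:  "e2e-test-httpd-pod",
				Image: "docker.io/library/httpd:2.4.38-alpine",
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}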
• [SLOW TEST:5.167 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541 should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":303,"completed":297,"skipped":4801,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:52:35.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-f5l4 STEP: Creating a pod to test atomic-volume-subpath Sep 4 14:52:35.845: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-f5l4" in namespace "subpath-8812" to be "Succeeded or Failed" Sep 4 14:52:35.849: INFO: Pod "pod-subpath-test-configmap-f5l4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.739449ms Sep 4 14:52:37.853: INFO: Pod "pod-subpath-test-configmap-f5l4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007830764s Sep 4 14:52:39.858: INFO: Pod "pod-subpath-test-configmap-f5l4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01220365s Sep 4 14:52:41.862: INFO: Pod "pod-subpath-test-configmap-f5l4": Phase="Running", Reason="", readiness=true. Elapsed: 6.016971703s Sep 4 14:52:43.866: INFO: Pod "pod-subpath-test-configmap-f5l4": Phase="Running", Reason="", readiness=true. Elapsed: 8.021181582s Sep 4 14:52:45.871: INFO: Pod "pod-subpath-test-configmap-f5l4": Phase="Running", Reason="", readiness=true. Elapsed: 10.025852746s Sep 4 14:52:47.876: INFO: Pod "pod-subpath-test-configmap-f5l4": Phase="Running", Reason="", readiness=true. Elapsed: 12.030300897s Sep 4 14:52:49.880: INFO: Pod "pod-subpath-test-configmap-f5l4": Phase="Running", Reason="", readiness=true. Elapsed: 14.034628921s Sep 4 14:52:51.885: INFO: Pod "pod-subpath-test-configmap-f5l4": Phase="Running", Reason="", readiness=true. Elapsed: 16.039310887s Sep 4 14:52:53.889: INFO: Pod "pod-subpath-test-configmap-f5l4": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.043755491s Sep 4 14:52:55.893: INFO: Pod "pod-subpath-test-configmap-f5l4": Phase="Running", Reason="", readiness=true. Elapsed: 20.048171516s Sep 4 14:52:57.898: INFO: Pod "pod-subpath-test-configmap-f5l4": Phase="Running", Reason="", readiness=true. Elapsed: 22.052876018s Sep 4 14:52:59.903: INFO: Pod "pod-subpath-test-configmap-f5l4": Phase="Running", Reason="", readiness=true. Elapsed: 24.057253021s Sep 4 14:53:01.907: INFO: Pod "pod-subpath-test-configmap-f5l4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.061239426s STEP: Saw pod success Sep 4 14:53:01.907: INFO: Pod "pod-subpath-test-configmap-f5l4" satisfied condition "Succeeded or Failed" Sep 4 14:53:01.909: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-f5l4 container test-container-subpath-configmap-f5l4: STEP: delete the pod Sep 4 14:53:01.954: INFO: Waiting for pod pod-subpath-test-configmap-f5l4 to disappear Sep 4 14:53:01.970: INFO: Pod pod-subpath-test-configmap-f5l4 no longer exists STEP: Deleting pod pod-subpath-test-configmap-f5l4 Sep 4 14:53:01.970: INFO: Deleting pod "pod-subpath-test-configmap-f5l4" in namespace "subpath-8812" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:53:01.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8812" for this suite. • [SLOW TEST:26.271 seconds] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":303,"completed":298,"skipped":4811,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:53:01.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-x69m STEP: Creating a pod to test atomic-volume-subpath Sep 
4 14:53:02.068: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-x69m" in namespace "subpath-8080" to be "Succeeded or Failed" Sep 4 14:53:02.086: INFO: Pod "pod-subpath-test-secret-x69m": Phase="Pending", Reason="", readiness=false. Elapsed: 18.851959ms Sep 4 14:53:04.108: INFO: Pod "pod-subpath-test-secret-x69m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040463026s Sep 4 14:53:06.113: INFO: Pod "pod-subpath-test-secret-x69m": Phase="Running", Reason="", readiness=true. Elapsed: 4.045266172s Sep 4 14:53:08.123: INFO: Pod "pod-subpath-test-secret-x69m": Phase="Running", Reason="", readiness=true. Elapsed: 6.055349428s Sep 4 14:53:10.127: INFO: Pod "pod-subpath-test-secret-x69m": Phase="Running", Reason="", readiness=true. Elapsed: 8.059600583s Sep 4 14:53:12.132: INFO: Pod "pod-subpath-test-secret-x69m": Phase="Running", Reason="", readiness=true. Elapsed: 10.064092375s Sep 4 14:53:14.135: INFO: Pod "pod-subpath-test-secret-x69m": Phase="Running", Reason="", readiness=true. Elapsed: 12.067772722s Sep 4 14:53:16.145: INFO: Pod "pod-subpath-test-secret-x69m": Phase="Running", Reason="", readiness=true. Elapsed: 14.077145706s Sep 4 14:53:18.170: INFO: Pod "pod-subpath-test-secret-x69m": Phase="Running", Reason="", readiness=true. Elapsed: 16.102625646s Sep 4 14:53:20.175: INFO: Pod "pod-subpath-test-secret-x69m": Phase="Running", Reason="", readiness=true. Elapsed: 18.107456756s Sep 4 14:53:22.179: INFO: Pod "pod-subpath-test-secret-x69m": Phase="Running", Reason="", readiness=true. Elapsed: 20.111792099s Sep 4 14:53:24.184: INFO: Pod "pod-subpath-test-secret-x69m": Phase="Running", Reason="", readiness=true. Elapsed: 22.11633127s Sep 4 14:53:26.189: INFO: Pod "pod-subpath-test-secret-x69m": Phase="Running", Reason="", readiness=true. Elapsed: 24.12096126s Sep 4 14:53:28.193: INFO: Pod "pod-subpath-test-secret-x69m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.125607689s STEP: Saw pod success Sep 4 14:53:28.193: INFO: Pod "pod-subpath-test-secret-x69m" satisfied condition "Succeeded or Failed" Sep 4 14:53:28.196: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-x69m container test-container-subpath-secret-x69m: STEP: delete the pod Sep 4 14:53:28.257: INFO: Waiting for pod pod-subpath-test-secret-x69m to disappear Sep 4 14:53:28.264: INFO: Pod pod-subpath-test-secret-x69m no longer exists STEP: Deleting pod pod-subpath-test-secret-x69m Sep 4 14:53:28.264: INFO: Deleting pod "pod-subpath-test-secret-x69m" in namespace "subpath-8080" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:53:28.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8080" for this suite. 
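Both atomic-writer subpath cases above (configmap and secret) exercise the same pod shape: a keyed volume mounted through `subPath` while the kubelet atomically swaps the underlying data, with the pod polled through Pending and Running until it reports Succeeded. A sketch of that pod, assuming a pre-existing ConfigMap and an illustrative busybox probe (the real suite uses its own e2e images, generated names, and a longer read loop):

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// subPathPod mounts a single key of a ConfigMap volume via subPath.
// The container reads the file while the test rewrites the ConfigMap,
// checking the kubelet's atomic-update behavior for keyed volumes.
// cmName and the "data-1" key are assumptions for illustration.
func subPathPod(ns, cmName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /probe/data && sleep 20"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "config",
					MountPath: "/probe/data",
					SubPath:   "data-1", // mount one key, not the whole volume dir
				}},
			}},
		},
	}
}
```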
• [SLOW TEST:26.293 seconds] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":303,"completed":299,"skipped":4825,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:53:28.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should create and stop a replication controller [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Sep 4 14:53:28.426: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1108' Sep 4 14:53:28.771: INFO: stderr: "" Sep 4 14:53:28.771: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Sep 4 14:53:28.772: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1108' Sep 4 14:53:28.884: INFO: stderr: "" Sep 4 14:53:28.884: INFO: stdout: "update-demo-nautilus-542sh update-demo-nautilus-mm4d6 " Sep 4 14:53:28.884: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-542sh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1108' Sep 4 14:53:29.059: INFO: stderr: "" Sep 4 14:53:29.059: INFO: stdout: "" Sep 4 14:53:29.059: INFO: update-demo-nautilus-542sh is created but not running Sep 4 14:53:34.059: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1108' Sep 4 14:53:34.181: INFO: stderr: "" Sep 4 14:53:34.181: INFO: stdout: "update-demo-nautilus-542sh update-demo-nautilus-mm4d6 " Sep 4 14:53:34.181: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-542sh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1108' Sep 4 14:53:34.286: INFO: stderr: "" Sep 4 14:53:34.286: INFO: stdout: "true" Sep 4 14:53:34.286: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-542sh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1108' Sep 4 14:53:34.405: INFO: stderr: "" Sep 4 14:53:34.405: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 4 14:53:34.405: INFO: validating pod update-demo-nautilus-542sh Sep 4 14:53:34.409: INFO: got data: { "image": "nautilus.jpg" } Sep 4 14:53:34.409: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 4 14:53:34.409: INFO: update-demo-nautilus-542sh is verified up and running Sep 4 14:53:34.409: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mm4d6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1108' Sep 4 14:53:34.524: INFO: stderr: "" Sep 4 14:53:34.524: INFO: stdout: "true" Sep 4 14:53:34.524: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mm4d6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1108' Sep 4 14:53:34.627: INFO: stderr: "" Sep 4 14:53:34.627: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 4 14:53:34.627: INFO: validating pod update-demo-nautilus-mm4d6 Sep 4 14:53:34.630: INFO: got data: { "image": "nautilus.jpg" } Sep 4 14:53:34.630: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 4 14:53:34.630: INFO: update-demo-nautilus-mm4d6 is verified up and running STEP: using delete to clean up resources Sep 4 14:53:34.630: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1108' Sep 4 14:53:34.756: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Sep 4 14:53:34.756: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Sep 4 14:53:34.756: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1108' Sep 4 14:53:34.852: INFO: stderr: "No resources found in kubectl-1108 namespace.\n" Sep 4 14:53:34.852: INFO: stdout: "" Sep 4 14:53:34.852: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1108 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 4 14:53:34.996: INFO: stderr: "" Sep 4 14:53:34.996: INFO: stdout: "update-demo-nautilus-542sh\nupdate-demo-nautilus-mm4d6\n" Sep 4 14:53:35.497: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1108' Sep 4 14:53:35.627: INFO: stderr: "No resources found in kubectl-1108 namespace.\n" Sep 4 14:53:35.627: INFO: stdout: "" Sep 4 14:53:35.627: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1108 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 4 14:53:35.733: INFO: stderr: "" Sep 4 14:53:35.733: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:53:35.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1108" for this suite. 
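The Update Demo flow above is: create a ReplicationController with two nautilus replicas, poll the pods through go-templates until both report running and verified, then force-delete with a zero grace period (hence the kubectl warning about unconfirmed termination, and the pods lingering briefly in the subsequent `get pods` before vanishing). A client-go sketch of the create and force-delete steps (helper shape illustrative; labels and image from the log):

```go
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createAndStopRC creates the update-demo ReplicationController and then
// deletes it the way `kubectl delete --grace-period=0 --force` does.
func createAndStopRC(cs *kubernetes.Clientset, ns string) error {
	replicas := int32(2)
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "update-demo-nautilus"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: map[string]string{"name": "update-demo"},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "update-demo"}},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "update-demo",
					Image: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0",
				}}},
			},
		},
	}
	if _, err := cs.CoreV1().ReplicationControllers(ns).Create(
		context.TODO(), rc, metav1.CreateOptions{}); err != nil {
		return err
	}
	// Zero grace period: immediate deletion without waiting for the
	// kubelet to confirm termination, exactly what the warning describes.
	zero := int64(0)
	return cs.CoreV1().ReplicationControllers(ns).Delete(
		context.TODO(), "update-demo-nautilus", metav1.DeleteOptions{GracePeriodSeconds: &zero})
}
```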
• [SLOW TEST:7.466 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should create and stop a replication controller [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":303,"completed":300,"skipped":4885,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:53:35.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Sep 4 14:53:36.673: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:53:36.683: INFO: Number of nodes with available pods: 0 Sep 4 14:53:36.683: INFO: Node latest-worker is running more than one daemon pod Sep 4 14:53:37.686: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:53:37.689: INFO: Number of nodes with available pods: 0 Sep 4 14:53:37.689: INFO: Node latest-worker is running more than one daemon pod Sep 4 14:53:38.688: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:53:38.721: INFO: Number of nodes with available pods: 0 Sep 4 14:53:38.721: INFO: Node latest-worker is running more than one daemon pod Sep 4 14:53:40.187: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:53:40.354: INFO: Number of nodes with available pods: 0 Sep 4 14:53:40.354: INFO: Node latest-worker is running more than one daemon pod Sep 4 14:53:40.766: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:53:40.769: INFO: Number of nodes with available pods: 0 Sep 4 14:53:40.769: INFO: Node latest-worker is running more than one daemon pod Sep 4 14:53:41.752: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:53:41.763: INFO: Number of nodes with available pods: 0 Sep 4 14:53:41.763: INFO: Node latest-worker is running more than one daemon pod Sep 4 14:53:42.689: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:53:42.693: INFO: Number of nodes with available pods: 2 Sep 4 14:53:42.693: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Sep 4 14:53:42.716: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:53:42.719: INFO: Number of nodes with available pods: 1 Sep 4 14:53:42.719: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:53:43.725: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:53:43.729: INFO: Number of nodes with available pods: 1 Sep 4 14:53:43.729: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:53:44.725: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:53:44.728: INFO: Number of nodes with available pods: 1 Sep 4 14:53:44.728: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:53:45.724: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:53:45.728: INFO: Number of nodes with available pods: 1 Sep 4 14:53:45.728: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:53:46.724: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:53:46.727: INFO: Number of nodes with available pods: 1 Sep 4 14:53:46.727: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:53:47.723: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:53:47.726: INFO: Number of nodes with available pods: 1 Sep 4 14:53:47.726: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:53:48.724: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:53:48.728: INFO: Number of nodes with available pods: 1 Sep 4 14:53:48.728: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:53:49.783: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:53:49.787: INFO: Number of nodes with available pods: 1 Sep 4 14:53:49.787: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:53:50.725: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:53:50.728: INFO: Number of nodes with available pods: 1 Sep 4 14:53:50.728: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:53:51.726: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:53:51.728: INFO: Number of nodes with available pods: 1 Sep 4 14:53:51.728: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:53:52.725: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:53:52.729: INFO: Number of nodes with available pods: 1 Sep 4 14:53:52.729: INFO: Node latest-worker2 is running more than one daemon pod Sep 4 14:53:53.724: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 4 14:53:53.728: INFO: Number of nodes with available pods: 2 Sep 4 14:53:53.728: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3610, will wait for the garbage collector to delete the pods Sep 4 14:53:53.791: INFO: Deleting DaemonSet.extensions daemon-set took: 6.700886ms Sep 4 14:53:54.191: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.142512ms Sep 4 14:53:58.195: INFO: Number of nodes with available pods: 0 Sep 4 14:53:58.195: INFO: Number of running nodes: 0, number of available pods: 0 Sep 4 14:53:58.198: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3610/daemonsets","resourceVersion":"6833884"},"items":null} Sep 4 14:53:58.200: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3610/pods","resourceVersion":"6833884"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:53:58.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3610" for this suite. 
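The repeated "DaemonSet pods can't tolerate node latest-control-plane" lines above are expected: the simple DaemonSet carries no toleration for the master taint, so the controller schedules onto the two workers only and the test counts 2/2 available, both initially and after one daemon pod is killed and revived. A sketch of the manifest shape, with the toleration that would change that left commented out (label key and image are assumptions for illustration):

```go
package example

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// simpleDaemonSet builds a DaemonSet like "daemon-set" above. Without a
// toleration for node-role.kubernetes.io/master:NoSchedule the controller
// skips the control-plane node, matching the log's "skip checking this
// node" messages.
func simpleDaemonSet(ns string) *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: ns},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Tolerations: []corev1.Toleration{{
					// 	Key:      "node-role.kubernetes.io/master",
					// 	Operator: corev1.TolerationOpExists,
					// 	Effect:   corev1.TaintEffectNoSchedule,
					// }}, // would extend scheduling to the control plane
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
}
```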
• [SLOW TEST:22.538 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":303,"completed":301,"skipped":4910,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:53:58.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 4 14:53:58.439: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Sep 4 14:53:58.493: INFO: Pod name sample-pod: Found 0 pods out of 1 Sep 4 14:54:03.520: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Sep 4 14:54:03.520: INFO: Creating deployment "test-rolling-update-deployment" Sep 4 14:54:03.548: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Sep 4 14:54:03.584: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Sep 4 14:54:05.593: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Sep 4 14:54:05.594: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734828043, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734828043, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734828043, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734828043, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-c4cb8d6d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 4 14:54:07.957: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) 
[AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 4 14:54:07.966: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-2693 /apis/apps/v1/namespaces/deployment-2693/deployments/test-rolling-update-deployment 03151921-f7f5-45c5-884b-b00312d1bca6 6833982 1 2020-09-04 14:54:03 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-09-04 14:54:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-04 14:54:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004d270b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-09-04 14:54:03 +0000 UTC,LastTransitionTime:2020-09-04 14:54:03 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet
"test-rolling-update-deployment-c4cb8d6d9" has successfully progressed.,LastUpdateTime:2020-09-04 14:54:07 +0000 UTC,LastTransitionTime:2020-09-04 14:54:03 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Sep 4 14:54:07.969: INFO: New ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9 deployment-2693 /apis/apps/v1/namespaces/deployment-2693/replicasets/test-rolling-update-deployment-c4cb8d6d9 6b5c1c66-f005-4e4e-92ee-a431685a8666 6833969 1 2020-09-04 14:54:03 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 03151921-f7f5-45c5-884b-b00312d1bca6 0xc004723810 0xc004723811}] [] [{kube-controller-manager Update apps/v1 2020-09-04 14:54:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"03151921-f7f5-45c5-884b-b00312d1bca6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: c4cb8d6d9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004723888 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Sep 4 14:54:07.969: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Sep 4 14:54:07.969: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-2693 /apis/apps/v1/namespaces/deployment-2693/replicasets/test-rolling-update-controller 630ea1eb-f4b1-4605-aa03-36027ef8ca22 6833981 
2 2020-09-04 14:53:58 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 03151921-f7f5-45c5-884b-b00312d1bca6 0xc004723707 0xc004723708}] [] [{e2e.test Update apps/v1 2020-09-04 14:53:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-04 14:54:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"03151921-f7f5-45c5-884b-b00312d1bca6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0047237a8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 4 14:54:07.972: INFO: Pod "test-rolling-update-deployment-c4cb8d6d9-mvvj8" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9-mvvj8 test-rolling-update-deployment-c4cb8d6d9- deployment-2693 /api/v1/namespaces/deployment-2693/pods/test-rolling-update-deployment-c4cb8d6d9-mvvj8 e03b5087-5c98-4bb3-bbfa-e6b85aaec54f 6833968 0 2020-09-04 14:54:03 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-c4cb8d6d9 6b5c1c66-f005-4e4e-92ee-a431685a8666 0xc004723d50 0xc004723d51}] [] [{kube-controller-manager Update v1 2020-09-04 14:54:03 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6b5c1c66-f005-4e4e-92ee-a431685a8666\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-04 14:54:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.110\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wkgwj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wkgwj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wkgwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toler
ation{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 14:54:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 14:54:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 14:54:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-04 14:54:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.110,StartTime:2020-09-04 14:54:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-04 14:54:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://5e053b86b79f049d5fdc9e4f2b8d02e3b27653c68d69af5fd8220cc38cd3ad9f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.110,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:54:07.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2693" for this suite. 
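The deployment dump above shows the default RollingUpdate strategy (maxSurge and maxUnavailable both 25%) and the revision annotations that line up the adopted ReplicaSet (revision ...832) behind the new one (...833). A sketch of the Deployment as the test creates it (image and labels from the log; the adoption itself happens simply because the selector matches the pre-existing ReplicaSet's pods):

```go
package example

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// rollingUpdateDeployment builds "test-rolling-update-deployment" with the
// default RollingUpdate knobs spelled out. Because name=sample-pod matches
// the pods of the existing "test-rolling-update-controller" ReplicaSet,
// the Deployment adopts it, scales it to 0, and brings up a new ReplicaSet:
// old pods deleted, new ones created.
func rollingUpdateDeployment(ns string) *appsv1.Deployment {
	replicas := int32(1)
	surge := intstr.FromString("25%")
	unavail := intstr.FromString("25%")
	labels := map[string]string{"name": "sample-pod"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment", Namespace: ns},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &surge,   // up to 25% extra pods during rollout
					MaxUnavailable: &unavail, // up to 25% of pods down during rollout
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "agnhost",
					Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20",
				}}},
			},
		},
	}
}
```

Scaling the adopted ReplicaSet to zero while the new one comes up is what the log's "has one old replica set (the one it adopted)" check asserts.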
• [SLOW TEST:9.699 seconds] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":302,"skipped":4922,"failed":0} SSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 4 14:54:07.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Sep 4 14:54:08.143: INFO: Created pod &Pod{ObjectMeta:{dns-7701 dns-7701 /api/v1/namespaces/dns-7701/pods/dns-7701 53943149-1e74-40d5-9180-bf7b9d702d4a 6833989 0 2020-09-04 14:54:08 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-09-04 14:54:08 +0000 UTC FieldsV1 
{"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6sz5s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6sz5s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6sz5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 4 14:54:08.165: INFO: The status of Pod dns-7701 is Pending, waiting for it to be Running (with Ready = true) Sep 4 14:54:10.172: INFO: The status of Pod dns-7701 is Pending, waiting for it 
to be Running (with Ready = true) Sep 4 14:54:12.169: INFO: The status of Pod dns-7701 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Sep 4 14:54:12.169: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-7701 PodName:dns-7701 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 4 14:54:12.169: INFO: >>> kubeConfig: /root/.kube/config I0904 14:54:12.208114 7 log.go:181] (0xc002eae790) (0xc0037ab180) Create stream I0904 14:54:12.208156 7 log.go:181] (0xc002eae790) (0xc0037ab180) Stream added, broadcasting: 1 I0904 14:54:12.210367 7 log.go:181] (0xc002eae790) Reply frame received for 1 I0904 14:54:12.210413 7 log.go:181] (0xc002eae790) (0xc0037ab220) Create stream I0904 14:54:12.210426 7 log.go:181] (0xc002eae790) (0xc0037ab220) Stream added, broadcasting: 3 I0904 14:54:12.211449 7 log.go:181] (0xc002eae790) Reply frame received for 3 I0904 14:54:12.211505 7 log.go:181] (0xc002eae790) (0xc003991180) Create stream I0904 14:54:12.211535 7 log.go:181] (0xc002eae790) (0xc003991180) Stream added, broadcasting: 5 I0904 14:54:12.212364 7 log.go:181] (0xc002eae790) Reply frame received for 5 I0904 14:54:12.323670 7 log.go:181] (0xc002eae790) Data frame received for 3 I0904 14:54:12.323706 7 log.go:181] (0xc0037ab220) (3) Data frame handling I0904 14:54:12.323734 7 log.go:181] (0xc0037ab220) (3) Data frame sent I0904 14:54:12.328645 7 log.go:181] (0xc002eae790) Data frame received for 3 I0904 14:54:12.328687 7 log.go:181] (0xc0037ab220) (3) Data frame handling I0904 14:54:12.328715 7 log.go:181] (0xc002eae790) Data frame received for 5 I0904 14:54:12.328858 7 log.go:181] (0xc003991180) (5) Data frame handling I0904 14:54:12.331062 7 log.go:181] (0xc002eae790) Data frame received for 1 I0904 14:54:12.331081 7 log.go:181] (0xc0037ab180) (1) Data frame handling I0904 14:54:12.331093 7 log.go:181] (0xc0037ab180) (1) Data frame sent I0904 14:54:12.331104 7 log.go:181] (0xc002eae790) (0xc0037ab180) Stream removed, broadcasting: 1 I0904 14:54:12.331119 7 log.go:181] (0xc002eae790) Go away received I0904 14:54:12.331242 7 log.go:181] (0xc002eae790) (0xc0037ab180) Stream removed, broadcasting: 1 I0904 14:54:12.331258 7 log.go:181] (0xc002eae790) (0xc0037ab220) Stream removed, broadcasting: 3 I0904 14:54:12.331264 7 log.go:181] (0xc002eae790) (0xc003991180) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Sep 4 14:54:12.331: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-7701 PodName:dns-7701 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 4 14:54:12.331: INFO: >>> kubeConfig: /root/.kube/config I0904 14:54:12.354365 7 log.go:181] (0xc0049da8f0) (0xc002323b80) Create stream I0904 14:54:12.354398 7 log.go:181] (0xc0049da8f0) (0xc002323b80) Stream added, broadcasting: 1 I0904 14:54:12.355920 7 log.go:181] (0xc0049da8f0) Reply frame received for 1 I0904 14:54:12.355957 7 log.go:181] (0xc0049da8f0) (0xc005308000) Create stream I0904 14:54:12.355972 7 log.go:181] (0xc0049da8f0) (0xc005308000) Stream added, broadcasting: 3 I0904 14:54:12.356848 7 log.go:181] (0xc0049da8f0) Reply frame received for 3 I0904 14:54:12.356873 7 log.go:181] (0xc0049da8f0) (0xc0039912c0) Create stream I0904 14:54:12.356880 7 log.go:181] (0xc0049da8f0) (0xc0039912c0) Stream added, broadcasting: 5 I0904 14:54:12.361248 7 log.go:181] (0xc0049da8f0) Reply frame received for 5 I0904 14:54:12.446865 7 log.go:181] (0xc0049da8f0) Data frame received for 3 I0904 14:54:12.446898 7 log.go:181] (0xc005308000) (3) Data frame handling I0904 14:54:12.446925 7 log.go:181] (0xc005308000) (3) Data frame sent I0904 14:54:12.451885 7 log.go:181] (0xc0049da8f0) Data frame received for 3 I0904 14:54:12.451912 7 log.go:181] (0xc005308000) (3) Data frame handling I0904 14:54:12.451975 7 log.go:181] (0xc0049da8f0) Data frame received for 5 I0904 14:54:12.452013 7 log.go:181] (0xc0039912c0) (5) Data frame handling I0904 14:54:12.453473 7 log.go:181] (0xc0049da8f0) Data frame received for 1 I0904 14:54:12.453524 7 log.go:181] (0xc002323b80) (1) Data frame handling I0904 14:54:12.453559 7 log.go:181] (0xc002323b80) (1) Data frame sent I0904 14:54:12.453581 7 log.go:181] (0xc0049da8f0) (0xc002323b80) Stream removed, broadcasting: 1 I0904 14:54:12.453604 7 log.go:181] (0xc0049da8f0) Go away received I0904 14:54:12.453689 7 log.go:181] (0xc0049da8f0) (0xc002323b80) Stream removed, broadcasting: 1 I0904 14:54:12.453712 7 log.go:181] (0xc0049da8f0) (0xc005308000) Stream removed, broadcasting: 3 I0904 14:54:12.453720 7 log.go:181] (0xc0049da8f0) (0xc0039912c0) Stream removed, broadcasting: 5 Sep 4 14:54:12.453: INFO: Deleting pod dns-7701... [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 4 14:54:12.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7701" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":303,"completed":303,"skipped":4927,"failed":0} SSSep 4 14:54:12.519: INFO: Running AfterSuite actions on all nodes Sep 4 14:54:12.519: INFO: Running AfterSuite actions on node 1 Sep 4 14:54:12.519: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":303,"completed":303,"skipped":4929,"failed":0} Ran 303 of 5232 Specs in 6635.862 seconds SUCCESS! -- 303 Passed | 0 Failed | 0 Pending | 4929 Skipped PASS
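For reference, the final spec exercised (test 303) pairs dnsPolicy None with an explicit dnsConfig, so the kubelet writes the pod's /etc/resolv.conf solely from the given nameserver and search list, which the in-pod `agnhost dns-server-list` and `dns-suffix` calls then verify. A sketch of that pod (name, namespace, and values follow the log):

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// customDNSPod sets dnsPolicy None, so resolv.conf comes entirely from
// dnsConfig rather than the node or the cluster DNS service.
func customDNSPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-7701", Namespace: ns},
		Spec: corev1.PodSpec{
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20",
				Args:  []string{"pause"},
			}},
		},
	}
}
```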