I0123 10:47:26.427617 8 e2e.go:224] Starting e2e run "bdd443d5-3dcd-11ea-bb65-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1579776444 - Will randomize all specs
Will run 201 of 2164 specs

Jan 23 10:47:27.200: INFO: >>> kubeConfig: /root/.kube/config
Jan 23 10:47:27.206: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 23 10:47:27.233: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 23 10:47:27.312: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 23 10:47:27.312: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 23 10:47:27.312: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 23 10:47:27.329: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 23 10:47:27.329: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 23 10:47:27.329: INFO: e2e test version: v1.13.12
Jan 23 10:47:27.330: INFO: kube-apiserver version: v1.13.8
[sig-storage] ConfigMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 10:47:27.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
Jan 23 10:47:27.493: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-bf492b9d-3dcd-11ea-bb65-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-bf492b9d-3dcd-11ea-bb65-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 10:47:42.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-hqkp7" for this suite.
Jan 23 10:48:08.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 10:48:08.389: INFO: namespace: e2e-tests-configmap-hqkp7, resource: bindings, ignored listing per whitelist
Jan 23 10:48:08.511: INFO: namespace e2e-tests-configmap-hqkp7 deletion completed in 26.493824653s

• [SLOW TEST:41.181 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 10:48:08.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-69j6
STEP: Creating a pod to test atomic-volume-subpath
Jan 23 10:48:08.837: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-69j6" in namespace "e2e-tests-subpath-njpsv" to be "success or failure"
Jan 23 10:48:08.919: INFO: Pod "pod-subpath-test-configmap-69j6": Phase="Pending", Reason="", readiness=false. Elapsed: 81.939688ms
Jan 23 10:48:10.944: INFO: Pod "pod-subpath-test-configmap-69j6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106515986s
Jan 23 10:48:12.957: INFO: Pod "pod-subpath-test-configmap-69j6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119770038s
Jan 23 10:48:14.978: INFO: Pod "pod-subpath-test-configmap-69j6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.140073892s
Jan 23 10:48:17.345: INFO: Pod "pod-subpath-test-configmap-69j6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.50700582s
Jan 23 10:48:19.468: INFO: Pod "pod-subpath-test-configmap-69j6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.6305784s
Jan 23 10:48:21.771: INFO: Pod "pod-subpath-test-configmap-69j6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.933424899s
Jan 23 10:48:23.786: INFO: Pod "pod-subpath-test-configmap-69j6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.948603717s
Jan 23 10:48:25.801: INFO: Pod "pod-subpath-test-configmap-69j6": Phase="Running", Reason="", readiness=false. Elapsed: 16.963984195s
Jan 23 10:48:27.817: INFO: Pod "pod-subpath-test-configmap-69j6": Phase="Running", Reason="", readiness=false. Elapsed: 18.979792225s
Jan 23 10:48:29.840: INFO: Pod "pod-subpath-test-configmap-69j6": Phase="Running", Reason="", readiness=false. Elapsed: 21.002117616s
Jan 23 10:48:31.915: INFO: Pod "pod-subpath-test-configmap-69j6": Phase="Running", Reason="", readiness=false. Elapsed: 23.07788103s
Jan 23 10:48:33.938: INFO: Pod "pod-subpath-test-configmap-69j6": Phase="Running", Reason="", readiness=false. Elapsed: 25.10062172s
Jan 23 10:48:35.960: INFO: Pod "pod-subpath-test-configmap-69j6": Phase="Running", Reason="", readiness=false. Elapsed: 27.122042652s
Jan 23 10:48:37.983: INFO: Pod "pod-subpath-test-configmap-69j6": Phase="Running", Reason="", readiness=false. Elapsed: 29.145209577s
Jan 23 10:48:40.000: INFO: Pod "pod-subpath-test-configmap-69j6": Phase="Running", Reason="", readiness=false. Elapsed: 31.162862234s
Jan 23 10:48:42.080: INFO: Pod "pod-subpath-test-configmap-69j6": Phase="Running", Reason="", readiness=false. Elapsed: 33.242981577s
Jan 23 10:48:44.098: INFO: Pod "pod-subpath-test-configmap-69j6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.260396128s
STEP: Saw pod success
Jan 23 10:48:44.098: INFO: Pod "pod-subpath-test-configmap-69j6" satisfied condition "success or failure"
Jan 23 10:48:44.104: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-69j6 container test-container-subpath-configmap-69j6:
STEP: delete the pod
Jan 23 10:48:44.793: INFO: Waiting for pod pod-subpath-test-configmap-69j6 to disappear
Jan 23 10:48:45.254: INFO: Pod pod-subpath-test-configmap-69j6 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-69j6
Jan 23 10:48:45.254: INFO: Deleting pod "pod-subpath-test-configmap-69j6" in namespace "e2e-tests-subpath-njpsv"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 10:48:45.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-njpsv" for this suite.
Jan 23 10:48:51.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 10:48:51.447: INFO: namespace: e2e-tests-subpath-njpsv, resource: bindings, ignored listing per whitelist
Jan 23 10:48:51.548: INFO: namespace e2e-tests-subpath-njpsv deletion completed in 6.274603539s

• [SLOW TEST:43.035 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 10:48:51.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 23 10:48:51.728: INFO: Waiting up to 5m0s for pod "pod-f17a5e23-3dcd-11ea-bb65-0242ac110005" in namespace "e2e-tests-emptydir-kbpcc" to be "success or failure"
Jan 23 10:48:51.741: INFO: Pod "pod-f17a5e23-3dcd-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.237641ms
Jan 23 10:48:53.845: INFO: Pod "pod-f17a5e23-3dcd-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117210234s
Jan 23 10:48:55.867: INFO: Pod "pod-f17a5e23-3dcd-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139417338s
Jan 23 10:48:57.938: INFO: Pod "pod-f17a5e23-3dcd-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.210703875s
Jan 23 10:49:00.074: INFO: Pod "pod-f17a5e23-3dcd-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.34648423s
Jan 23 10:49:02.173: INFO: Pod "pod-f17a5e23-3dcd-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.444889988s
STEP: Saw pod success
Jan 23 10:49:02.173: INFO: Pod "pod-f17a5e23-3dcd-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 10:49:02.189: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f17a5e23-3dcd-11ea-bb65-0242ac110005 container test-container:
STEP: delete the pod
Jan 23 10:49:02.392: INFO: Waiting for pod pod-f17a5e23-3dcd-11ea-bb65-0242ac110005 to disappear
Jan 23 10:49:02.401: INFO: Pod pod-f17a5e23-3dcd-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 10:49:02.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-kbpcc" for this suite.
Jan 23 10:49:08.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 10:49:08.716: INFO: namespace: e2e-tests-emptydir-kbpcc, resource: bindings, ignored listing per whitelist
Jan 23 10:49:08.759: INFO: namespace e2e-tests-emptydir-kbpcc deletion completed in 6.353691533s

• [SLOW TEST:17.211 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Pods
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 10:49:08.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 23 10:49:19.114: INFO: Waiting up to 5m0s for pod "client-envvars-01c93e88-3dce-11ea-bb65-0242ac110005" in namespace "e2e-tests-pods-dl99z" to be "success or failure"
Jan 23 10:49:19.131: INFO: Pod "client-envvars-01c93e88-3dce-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.821589ms
Jan 23 10:49:21.148: INFO: Pod "client-envvars-01c93e88-3dce-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03334288s
Jan 23 10:49:23.168: INFO: Pod "client-envvars-01c93e88-3dce-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052944923s
Jan 23 10:49:25.640: INFO: Pod "client-envvars-01c93e88-3dce-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.525033931s
Jan 23 10:49:27.663: INFO: Pod "client-envvars-01c93e88-3dce-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.547944061s
Jan 23 10:49:29.676: INFO: Pod "client-envvars-01c93e88-3dce-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.560893588s
STEP: Saw pod success
Jan 23 10:49:29.676: INFO: Pod "client-envvars-01c93e88-3dce-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 10:49:29.680: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-01c93e88-3dce-11ea-bb65-0242ac110005 container env3cont:
STEP: delete the pod
Jan 23 10:49:30.932: INFO: Waiting for pod client-envvars-01c93e88-3dce-11ea-bb65-0242ac110005 to disappear
Jan 23 10:49:30.954: INFO: Pod client-envvars-01c93e88-3dce-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 10:49:30.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-dl99z" for this suite.
Jan 23 10:50:25.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 10:50:25.127: INFO: namespace: e2e-tests-pods-dl99z, resource: bindings, ignored listing per whitelist
Jan 23 10:50:25.144: INFO: namespace e2e-tests-pods-dl99z deletion completed in 54.176301145s

• [SLOW TEST:76.385 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 10:50:25.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 23 10:50:25.345: INFO: Waiting up to 5m0s for pod "pod-29439c55-3dce-11ea-bb65-0242ac110005" in namespace "e2e-tests-emptydir-twt6k" to be "success or failure"
Jan 23 10:50:25.365: INFO: Pod "pod-29439c55-3dce-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.979277ms
Jan 23 10:50:27.376: INFO: Pod "pod-29439c55-3dce-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031003851s
Jan 23 10:50:29.403: INFO: Pod "pod-29439c55-3dce-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057943572s
Jan 23 10:50:31.725: INFO: Pod "pod-29439c55-3dce-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.380270134s
Jan 23 10:50:34.141: INFO: Pod "pod-29439c55-3dce-11ea-bb65-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.795646602s
Jan 23 10:50:36.165: INFO: Pod "pod-29439c55-3dce-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.820131369s
STEP: Saw pod success
Jan 23 10:50:36.165: INFO: Pod "pod-29439c55-3dce-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 10:50:36.316: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-29439c55-3dce-11ea-bb65-0242ac110005 container test-container:
STEP: delete the pod
Jan 23 10:50:36.398: INFO: Waiting for pod pod-29439c55-3dce-11ea-bb65-0242ac110005 to disappear
Jan 23 10:50:36.405: INFO: Pod pod-29439c55-3dce-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 10:50:36.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-twt6k" for this suite.
Jan 23 10:50:42.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 10:50:42.641: INFO: namespace: e2e-tests-emptydir-twt6k, resource: bindings, ignored listing per whitelist
Jan 23 10:50:42.703: INFO: namespace e2e-tests-emptydir-twt6k deletion completed in 6.222179339s

• [SLOW TEST:17.558 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 10:50:42.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-xm87l
Jan 23 10:50:52.922: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-xm87l
STEP: checking the pod's current state and verifying that restartCount is present
Jan 23 10:50:52.928: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 10:54:54.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-xm87l" for this suite.
Jan 23 10:55:00.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 10:55:00.314: INFO: namespace: e2e-tests-container-probe-xm87l, resource: bindings, ignored listing per whitelist
Jan 23 10:55:00.453: INFO: namespace e2e-tests-container-probe-xm87l deletion completed in 6.313234639s

• [SLOW TEST:257.750 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 10:55:00.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-cd71dfc5-3dce-11ea-bb65-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 23 10:55:00.775: INFO: Waiting up to 5m0s for pod "pod-secrets-cd72de7f-3dce-11ea-bb65-0242ac110005" in namespace "e2e-tests-secrets-qbgmh" to be "success or failure"
Jan 23 10:55:00.787: INFO: Pod "pod-secrets-cd72de7f-3dce-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.656343ms
Jan 23 10:55:02.798: INFO: Pod "pod-secrets-cd72de7f-3dce-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022455051s
Jan 23 10:55:04.817: INFO: Pod "pod-secrets-cd72de7f-3dce-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041463785s
Jan 23 10:55:06.828: INFO: Pod "pod-secrets-cd72de7f-3dce-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05302732s
Jan 23 10:55:08.839: INFO: Pod "pod-secrets-cd72de7f-3dce-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063364989s
Jan 23 10:55:10.859: INFO: Pod "pod-secrets-cd72de7f-3dce-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.083712571s
STEP: Saw pod success
Jan 23 10:55:10.859: INFO: Pod "pod-secrets-cd72de7f-3dce-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 10:55:10.878: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-cd72de7f-3dce-11ea-bb65-0242ac110005 container secret-volume-test:
STEP: delete the pod
Jan 23 10:55:10.972: INFO: Waiting for pod pod-secrets-cd72de7f-3dce-11ea-bb65-0242ac110005 to disappear
Jan 23 10:55:10.978: INFO: Pod pod-secrets-cd72de7f-3dce-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 10:55:10.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-qbgmh" for this suite.
Jan 23 10:55:17.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 10:55:17.126: INFO: namespace: e2e-tests-secrets-qbgmh, resource: bindings, ignored listing per whitelist
Jan 23 10:55:17.179: INFO: namespace e2e-tests-secrets-qbgmh deletion completed in 6.19651827s

• [SLOW TEST:16.725 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs
  should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 10:55:17.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Jan 23 10:55:17.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tm6gx'
Jan 23 10:55:20.027: INFO: stderr: ""
Jan 23 10:55:20.027: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Jan 23 10:55:21.746: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 10:55:21.746: INFO: Found 0 / 1
Jan 23 10:55:22.041: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 10:55:22.042: INFO: Found 0 / 1
Jan 23 10:55:23.135: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 10:55:23.135: INFO: Found 0 / 1
Jan 23 10:55:24.049: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 10:55:24.049: INFO: Found 0 / 1
Jan 23 10:55:25.055: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 10:55:25.055: INFO: Found 0 / 1
Jan 23 10:55:26.037: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 10:55:26.037: INFO: Found 0 / 1
Jan 23 10:55:27.042: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 10:55:27.043: INFO: Found 0 / 1
Jan 23 10:55:28.056: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 10:55:28.056: INFO: Found 0 / 1
Jan 23 10:55:29.051: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 10:55:29.052: INFO: Found 1 / 1
Jan 23 10:55:29.052: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jan 23 10:55:29.069: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 10:55:29.069: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
Jan 23 10:55:29.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-bb79l redis-master --namespace=e2e-tests-kubectl-tm6gx'
Jan 23 10:55:29.271: INFO: stderr: ""
Jan 23 10:55:29.271: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 23 Jan 10:55:27.114 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 23 Jan 10:55:27.114 # Server started, Redis version 3.2.12\n1:M 23 Jan 10:55:27.115 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 23 Jan 10:55:27.115 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan 23 10:55:29.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bb79l redis-master --namespace=e2e-tests-kubectl-tm6gx --tail=1'
Jan 23 10:55:29.467: INFO: stderr: ""
Jan 23 10:55:29.467: INFO: stdout: "1:M 23 Jan 10:55:27.115 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan 23 10:55:29.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bb79l redis-master --namespace=e2e-tests-kubectl-tm6gx --limit-bytes=1'
Jan 23 10:55:29.615: INFO: stderr: ""
Jan 23 10:55:29.616: INFO: stdout: " "
STEP: exposing timestamps
Jan 23 10:55:29.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bb79l redis-master --namespace=e2e-tests-kubectl-tm6gx --tail=1 --timestamps'
Jan 23 10:55:29.811: INFO: stderr: ""
Jan 23 10:55:29.811: INFO: stdout: "2020-01-23T10:55:27.11654625Z 1:M 23 Jan 10:55:27.115 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan 23 10:55:32.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bb79l redis-master --namespace=e2e-tests-kubectl-tm6gx --since=1s'
Jan 23 10:55:32.625: INFO: stderr: ""
Jan 23 10:55:32.625: INFO: stdout: ""
Jan 23 10:55:32.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bb79l redis-master --namespace=e2e-tests-kubectl-tm6gx --since=24h'
Jan 23 10:55:32.826: INFO: stderr: ""
Jan 23 10:55:32.826: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 23 Jan 10:55:27.114 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 23 Jan 10:55:27.114 # Server started, Redis version 3.2.12\n1:M 23 Jan 10:55:27.115 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 23 Jan 10:55:27.115 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Jan 23 10:55:32.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-tm6gx'
Jan 23 10:55:32.946: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 23 10:55:32.946: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan 23 10:55:32.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-tm6gx'
Jan 23 10:55:33.213: INFO: stderr: "No resources found.\n"
Jan 23 10:55:33.214: INFO: stdout: ""
Jan 23 10:55:33.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-tm6gx -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 23 10:55:33.318: INFO: stderr: ""
Jan 23 10:55:33.318: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 10:55:33.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tm6gx" for this suite.
Jan 23 10:55:39.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 10:55:39.486: INFO: namespace: e2e-tests-kubectl-tm6gx, resource: bindings, ignored listing per whitelist
Jan 23 10:55:39.571: INFO: namespace e2e-tests-kubectl-tm6gx deletion completed in 6.241489167s
• [SLOW TEST:22.391 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 10:55:39.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 23 10:55:39.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 10:55:47.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-98xqx" for this suite.
Jan 23 10:56:33.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 10:56:34.026: INFO: namespace: e2e-tests-pods-98xqx, resource: bindings, ignored listing per whitelist
Jan 23 10:56:34.164: INFO: namespace e2e-tests-pods-98xqx deletion completed in 46.273892844s
• [SLOW TEST:54.594 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 10:56:34.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-krn84 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-krn84;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-krn84 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-krn84;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-krn84.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-krn84.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-krn84.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-krn84.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-krn84.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-krn84.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-krn84.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-krn84.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-krn84.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-krn84.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-krn84.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-krn84.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-krn84.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 96.56.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.56.96_udp@PTR;check="$$(dig +tcp +noall +answer +search 96.56.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.56.96_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-krn84 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-krn84;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-krn84 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-krn84;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-krn84.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-krn84.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-krn84.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-krn84.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-krn84.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-krn84.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-krn84.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-krn84.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-krn84.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-krn84.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-krn84.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-krn84.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-krn84.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 96.56.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.56.96_udp@PTR;check="$$(dig +tcp +noall +answer +search 96.56.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.56.96_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 23 10:56:51.031: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-krn84/dns-test-05717198-3dcf-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-05717198-3dcf-11ea-bb65-0242ac110005)
Jan 23 10:56:51.037: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-krn84/dns-test-05717198-3dcf-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-05717198-3dcf-11ea-bb65-0242ac110005)
Jan 23 10:56:51.044: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-krn84 from pod e2e-tests-dns-krn84/dns-test-05717198-3dcf-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-05717198-3dcf-11ea-bb65-0242ac110005)
Jan 23 10:56:51.053: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-krn84 from pod e2e-tests-dns-krn84/dns-test-05717198-3dcf-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-05717198-3dcf-11ea-bb65-0242ac110005)
Jan 23 10:56:51.063: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-krn84.svc from pod e2e-tests-dns-krn84/dns-test-05717198-3dcf-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-05717198-3dcf-11ea-bb65-0242ac110005)
Jan 23 10:56:51.071: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-krn84.svc from pod e2e-tests-dns-krn84/dns-test-05717198-3dcf-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-05717198-3dcf-11ea-bb65-0242ac110005)
Jan 23 10:56:51.075: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-krn84.svc from pod e2e-tests-dns-krn84/dns-test-05717198-3dcf-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-05717198-3dcf-11ea-bb65-0242ac110005)
Jan 23 10:56:51.080: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-krn84.svc from pod e2e-tests-dns-krn84/dns-test-05717198-3dcf-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-05717198-3dcf-11ea-bb65-0242ac110005)
Jan 23 10:56:51.084: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-krn84.svc from pod e2e-tests-dns-krn84/dns-test-05717198-3dcf-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-05717198-3dcf-11ea-bb65-0242ac110005)
Jan 23 10:56:51.088: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-krn84.svc from pod e2e-tests-dns-krn84/dns-test-05717198-3dcf-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-05717198-3dcf-11ea-bb65-0242ac110005)
Jan 23 10:56:51.093: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-krn84/dns-test-05717198-3dcf-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-05717198-3dcf-11ea-bb65-0242ac110005)
Jan 23 10:56:51.097: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-krn84/dns-test-05717198-3dcf-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-05717198-3dcf-11ea-bb65-0242ac110005)
Jan 23 10:56:51.107: INFO: Lookups using e2e-tests-dns-krn84/dns-test-05717198-3dcf-11ea-bb65-0242ac110005 failed for: [jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-krn84 jessie_tcp@dns-test-service.e2e-tests-dns-krn84 jessie_udp@dns-test-service.e2e-tests-dns-krn84.svc jessie_tcp@dns-test-service.e2e-tests-dns-krn84.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-krn84.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-krn84.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-krn84.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-krn84.svc jessie_udp@PodARecord jessie_tcp@PodARecord]
Jan 23 10:56:56.227: INFO: DNS probes using e2e-tests-dns-krn84/dns-test-05717198-3dcf-11ea-bb65-0242ac110005 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 10:56:57.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-krn84" for this suite.
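The probe script in the DNS test derives its query names with awk. Two of those derivations, reproduced standalone below with the sample values visible in the log (the pod IP is hypothetical here; inside the probe it comes from `hostname -i`):

```shell
pod_ip="10.32.0.4"      # hypothetical pod IP standing in for `hostname -i`
svc_ip="10.96.56.96"    # the service ClusterIP the test PTR-checks

# Pod A record: dashed IP under <namespace>.pod.cluster.local,
# exactly the awk expression the probe script uses.
pod_a=$(echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".e2e-tests-dns-krn84.pod.cluster.local"}')

# PTR name: octets reversed under in-addr.arpa., matching the
# 96.56.96.10.in-addr.arpa. query in the script above.
ptr=$(echo "$svc_ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}')

echo "$pod_a"   # 10-32-0-4.e2e-tests-dns-krn84.pod.cluster.local
echo "$ptr"     # 96.56.96.10.in-addr.arpa.
```

The `$$` in the logged script is the e2e framework escaping `$` for the shell it generates; run standalone, single `$` is correct.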
Jan 23 10:57:04.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 10:57:04.657: INFO: namespace: e2e-tests-dns-krn84, resource: bindings, ignored listing per whitelist
Jan 23 10:57:04.721: INFO: namespace e2e-tests-dns-krn84 deletion completed in 6.582971733s
• [SLOW TEST:30.556 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 10:57:04.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 23 10:57:13.743: INFO: Successfully updated pod "labelsupdate17815718-3dcf-11ea-bb65-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 10:57:17.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qtcfm" for this suite.
Jan 23 10:57:42.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 10:57:42.074: INFO: namespace: e2e-tests-downward-api-qtcfm, resource: bindings, ignored listing per whitelist
Jan 23 10:57:42.182: INFO: namespace e2e-tests-downward-api-qtcfm deletion completed in 24.264853564s
• [SLOW TEST:37.460 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 10:57:42.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 23 10:57:52.888: INFO: Successfully updated pod "pod-update-2dbb7472-3dcf-11ea-bb65-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Jan 23 10:57:53.012: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 10:57:53.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-dvsjp" for this suite.
Jan 23 10:58:17.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 10:58:17.201: INFO: namespace: e2e-tests-pods-dvsjp, resource: bindings, ignored listing per whitelist
Jan 23 10:58:17.230: INFO: namespace e2e-tests-pods-dvsjp deletion completed in 24.21259329s
• [SLOW TEST:35.047 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 10:58:17.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 23 10:58:17.396: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 23 10:58:17.444: INFO: Waiting for terminating namespaces to be deleted...
Jan 23 10:58:17.448: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Jan 23 10:58:17.463: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 23 10:58:17.463: INFO: Container weave ready: true, restart count 0
Jan 23 10:58:17.464: INFO: Container weave-npc ready: true, restart count 0
Jan 23 10:58:17.464: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 23 10:58:17.464: INFO: Container coredns ready: true, restart count 0
Jan 23 10:58:17.464: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Jan 23 10:58:17.464: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Jan 23 10:58:17.464: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Jan 23 10:58:17.464: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 23 10:58:17.464: INFO: Container coredns ready: true, restart count 0
Jan 23 10:58:17.464: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 23 10:58:17.464: INFO: Container kube-proxy ready: true, restart count 0
Jan 23 10:58:17.464: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
[It] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15ec7ee670fc30f2], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
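The FailedScheduling event above is produced by a pod whose `nodeSelector` matches no node label, so the scheduler leaves it Pending. A minimal manifest that reproduces that state might look like the sketch below; the label key and value are made up for illustration (the e2e test generates its own unique key at runtime):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
  nodeSelector:
    example.com/nonexistent-label: "42"   # hypothetical; no node carries this label
```

Applying such a pod to the single-node cluster above would yield the same event text: "0/1 nodes are available: 1 node(s) didn't match node selector."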
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 10:58:18.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-xp9t9" for this suite.
Jan 23 10:58:24.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 10:58:24.785: INFO: namespace: e2e-tests-sched-pred-xp9t9, resource: bindings, ignored listing per whitelist
Jan 23 10:58:24.833: INFO: namespace e2e-tests-sched-pred-xp9t9 deletion completed in 6.280873944s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:7.603 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 10:58:24.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 23 10:58:25.197: INFO: Waiting up to 5m0s for pod "downwardapi-volume-473b9140-3dcf-11ea-bb65-0242ac110005" in namespace "e2e-tests-downward-api-qt4rf" to be "success or failure"
Jan 23 10:58:25.207: INFO: Pod "downwardapi-volume-473b9140-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.285754ms
Jan 23 10:58:27.352: INFO: Pod "downwardapi-volume-473b9140-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154582768s
Jan 23 10:58:29.370: INFO: Pod "downwardapi-volume-473b9140-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172060812s
Jan 23 10:58:31.767: INFO: Pod "downwardapi-volume-473b9140-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.569770059s
Jan 23 10:58:33.782: INFO: Pod "downwardapi-volume-473b9140-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.584151707s
Jan 23 10:58:37.111: INFO: Pod "downwardapi-volume-473b9140-3dcf-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.913422555s
STEP: Saw pod success
Jan 23 10:58:37.111: INFO: Pod "downwardapi-volume-473b9140-3dcf-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 10:58:37.122: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-473b9140-3dcf-11ea-bb65-0242ac110005 container client-container:
STEP: delete the pod
Jan 23 10:58:37.772: INFO: Waiting for pod downwardapi-volume-473b9140-3dcf-11ea-bb65-0242ac110005 to disappear
Jan 23 10:58:37.791: INFO: Pod downwardapi-volume-473b9140-3dcf-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 10:58:37.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qt4rf" for this suite.
Jan 23 10:58:43.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 10:58:44.203: INFO: namespace: e2e-tests-downward-api-qt4rf, resource: bindings, ignored listing per whitelist
Jan 23 10:58:44.220: INFO: namespace e2e-tests-downward-api-qt4rf deletion completed in 6.416526486s
• [SLOW TEST:19.387 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 10:58:44.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 23 10:58:44.520: INFO: Waiting up to 5m0s for pod "downward-api-52cca7f7-3dcf-11ea-bb65-0242ac110005" in namespace "e2e-tests-downward-api-tvcrn" to be "success or failure"
Jan 23 10:58:44.590: INFO: Pod "downward-api-52cca7f7-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 69.971746ms
Jan 23 10:58:46.647: INFO: Pod "downward-api-52cca7f7-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127562713s
Jan 23 10:58:48.673: INFO: Pod "downward-api-52cca7f7-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153590945s
Jan 23 10:58:50.961: INFO: Pod "downward-api-52cca7f7-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.441142974s
Jan 23 10:58:53.168: INFO: Pod "downward-api-52cca7f7-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.648465011s
Jan 23 10:58:55.290: INFO: Pod "downward-api-52cca7f7-3dcf-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.770402932s
STEP: Saw pod success
Jan 23 10:58:55.290: INFO: Pod "downward-api-52cca7f7-3dcf-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 10:58:55.314: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-52cca7f7-3dcf-11ea-bb65-0242ac110005 container dapi-container:
STEP: delete the pod
Jan 23 10:58:55.567: INFO: Waiting for pod downward-api-52cca7f7-3dcf-11ea-bb65-0242ac110005 to disappear
Jan 23 10:58:55.579: INFO: Pod downward-api-52cca7f7-3dcf-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 10:58:55.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tvcrn" for this suite.
Jan 23 10:59:01.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 10:59:01.863: INFO: namespace: e2e-tests-downward-api-tvcrn, resource: bindings, ignored listing per whitelist
Jan 23 10:59:02.041: INFO: namespace e2e-tests-downward-api-tvcrn deletion completed in 6.380911957s
• [SLOW TEST:17.821 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 10:59:02.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-v2thz
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 23 10:59:02.268: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 23 10:59:40.418: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-v2thz PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 10:59:40.418: INFO: >>> kubeConfig: /root/.kube/config
I0123 10:59:40.527615 8 log.go:172] (0xc001b4a370) (0xc001b321e0) Create stream
I0123 10:59:40.527721 8 log.go:172] (0xc001b4a370) (0xc001b321e0) Stream added, broadcasting: 1
I0123 10:59:40.536849 8 log.go:172] (0xc001b4a370) Reply frame received for 1
I0123 10:59:40.536949 8 log.go:172] (0xc001b4a370) (0xc001a1c460) Create stream
I0123 10:59:40.536997 8 log.go:172] (0xc001b4a370) (0xc001a1c460) Stream added, broadcasting: 3
I0123 10:59:40.539116 8 log.go:172] (0xc001b4a370) Reply frame received for 3
I0123 10:59:40.539291 8 log.go:172] (0xc001b4a370) (0xc001c620a0) Create stream
I0123 10:59:40.539315 8 log.go:172] (0xc001b4a370) (0xc001c620a0) Stream added, broadcasting: 5
I0123 10:59:40.541272 8 log.go:172] (0xc001b4a370) Reply frame received for 5
I0123 10:59:41.821201 8 log.go:172] (0xc001b4a370) Data frame received for 3
I0123 10:59:41.821332 8 log.go:172] (0xc001a1c460) (3) Data frame handling
I0123 10:59:41.821396 8 log.go:172] (0xc001a1c460) (3) Data frame sent
I0123 10:59:42.052665 8 log.go:172] (0xc001b4a370) (0xc001a1c460) Stream removed, broadcasting: 3
I0123 10:59:42.052872 8 log.go:172] (0xc001b4a370) Data frame received for 1
I0123 10:59:42.052881 8 log.go:172] (0xc001b321e0) (1) Data frame handling
I0123 10:59:42.052901 8 log.go:172] (0xc001b321e0) (1) Data frame sent
I0123 10:59:42.052911 8 log.go:172] (0xc001b4a370) (0xc001b321e0) Stream removed, broadcasting: 1
I0123 10:59:42.053668 8 log.go:172] (0xc001b4a370) (0xc001c620a0) Stream removed, broadcasting: 5
I0123 10:59:42.053893 8 log.go:172] (0xc001b4a370) Go away received
I0123 10:59:42.054132 8 log.go:172] (0xc001b4a370) (0xc001b321e0) Stream removed, broadcasting: 1
I0123 10:59:42.054161 8 log.go:172] (0xc001b4a370) (0xc001a1c460) Stream removed, broadcasting: 3
I0123 10:59:42.054169 8 log.go:172] (0xc001b4a370) (0xc001c620a0) Stream removed, broadcasting: 5
Jan 23 10:59:42.054: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 10:59:42.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-v2thz" for this suite.
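The UDP probe above sends `hostName` with `nc -w 1 -u` and pipes the reply through `grep -v '^\s*$'`, so a blank or whitespace-only reply counts as no answer at all. That last filtering step can be sketched standalone, with a shell function standing in for the nc reply (sample strings below are made up; no cluster or listener is needed):

```shell
# check: echo OK if the "reply" contains a non-blank line, EMPTY otherwise,
# mirroring the probe's grep -v '^\s*$' emptiness test.
check() {
  printf '%s\n' "$1" | grep -qv '^[[:space:]]*$' && echo OK || echo EMPTY
}

check "netserver-0"   # OK    (a real hostname came back)
check "   "           # EMPTY (whitespace-only reply is treated as failure)
```

In the real test, a non-empty reply naming the backend pod (`netserver-0`) is what lets the framework log "Found all expected endpoints".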
Jan 23 11:00:08.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:00:08.409: INFO: namespace: e2e-tests-pod-network-test-v2thz, resource: bindings, ignored listing per whitelist
Jan 23 11:00:08.492: INFO: namespace e2e-tests-pod-network-test-v2thz deletion completed in 26.419984852s
• [SLOW TEST:66.451 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server
  should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:00:08.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jan 23 11:00:08.980: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:00:09.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tvgbl" for this suite.
Jan 23 11:00:15.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:00:15.269: INFO: namespace: e2e-tests-kubectl-tvgbl, resource: bindings, ignored listing per whitelist
Jan 23 11:00:15.305: INFO: namespace e2e-tests-kubectl-tvgbl deletion completed in 6.184022831s
• [SLOW TEST:6.812 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0 [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:00:15.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 23 11:00:15.488: INFO: Waiting up to 5m0s for pod "downwardapi-volume-89080db5-3dcf-11ea-bb65-0242ac110005" in namespace "e2e-tests-projected-bfrbr" to be "success or failure"
Jan 23 11:00:15.500: INFO: Pod "downwardapi-volume-89080db5-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.305427ms
Jan 23 11:00:17.724: INFO: Pod "downwardapi-volume-89080db5-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235421323s
Jan 23 11:00:19.737: INFO: Pod "downwardapi-volume-89080db5-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.248619952s
Jan 23 11:00:21.829: INFO: Pod "downwardapi-volume-89080db5-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.340523335s
Jan 23 11:00:23.855: INFO: Pod "downwardapi-volume-89080db5-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.366527376s
Jan 23 11:00:26.234: INFO: Pod "downwardapi-volume-89080db5-3dcf-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.745530887s
STEP: Saw pod success
Jan 23 11:00:26.234: INFO: Pod "downwardapi-volume-89080db5-3dcf-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 11:00:26.244: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-89080db5-3dcf-11ea-bb65-0242ac110005 container client-container:
STEP: delete the pod
Jan 23 11:00:26.471: INFO: Waiting for pod downwardapi-volume-89080db5-3dcf-11ea-bb65-0242ac110005 to disappear
Jan 23 11:00:26.487: INFO: Pod downwardapi-volume-89080db5-3dcf-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:00:26.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bfrbr" for this suite.
Jan 23 11:00:32.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:00:32.646: INFO: namespace: e2e-tests-projected-bfrbr, resource: bindings, ignored listing per whitelist
Jan 23 11:00:32.678: INFO: namespace e2e-tests-projected-bfrbr deletion completed in 6.172874126s
• [SLOW TEST:17.373 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
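The downward API test just completed mounts a projected volume that exposes the container's CPU limit via a `resourceFieldRef`; because the container sets no limit, the file resolves to the node's allocatable CPU. A hedged sketch of that kind of pod (the name and image are assumptions, not the framework's actual spec):

```yaml
# Illustrative pod: no cpu limit is set, so the projected downward API file
# "cpu_limit" defaults to the node's allocatable CPU.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.36              # assumed image
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```

The test then reads the container's log and checks that the reported value matches the node's allocatable CPU.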
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:00:32.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 23 11:00:33.082: INFO: Waiting up to 5m0s for pod "downwardapi-volume-936930e8-3dcf-11ea-bb65-0242ac110005" in namespace "e2e-tests-projected-pw4w8" to be "success or failure"
Jan 23 11:00:33.106: INFO: Pod "downwardapi-volume-936930e8-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.080634ms
Jan 23 11:00:35.316: INFO: Pod "downwardapi-volume-936930e8-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233998854s
Jan 23 11:00:37.325: INFO: Pod "downwardapi-volume-936930e8-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.243099802s
Jan 23 11:00:39.640: INFO: Pod "downwardapi-volume-936930e8-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.557444628s
Jan 23 11:00:41.655: INFO: Pod "downwardapi-volume-936930e8-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.572559963s
Jan 23 11:00:43.671: INFO: Pod "downwardapi-volume-936930e8-3dcf-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.588303628s
STEP: Saw pod success
Jan 23 11:00:43.671: INFO: Pod "downwardapi-volume-936930e8-3dcf-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 11:00:43.677: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-936930e8-3dcf-11ea-bb65-0242ac110005 container client-container:
STEP: delete the pod
Jan 23 11:00:43.936: INFO: Waiting for pod downwardapi-volume-936930e8-3dcf-11ea-bb65-0242ac110005 to disappear
Jan 23 11:00:43.960: INFO: Pod downwardapi-volume-936930e8-3dcf-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:00:43.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pw4w8" for this suite.
Jan 23 11:00:50.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:00:50.246: INFO: namespace: e2e-tests-projected-pw4w8, resource: bindings, ignored listing per whitelist
Jan 23 11:00:50.318: INFO: namespace e2e-tests-projected-pw4w8 deletion completed in 6.234486598s
• [SLOW TEST:17.640 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] KubeletManagedEtcHosts
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
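The cpu-request variant of the downward API test works the same way, except the container does set a CPU request and the projected file exposes it with `resource: requests.cpu`. A hedged sketch (name, image, and request value are assumptions for illustration):

```yaml
# Illustrative pod: the projected file "cpu_request" reports the container's
# own cpu request, scaled by the divisor.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-request-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.36                # assumed image
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                      # assumed request value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m              # report the request in millicores
```

With `divisor: 1m`, a 250m request is written to the file as `250`.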
Jan 23 11:00:50.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jan 23 11:01:14.836: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7kn98 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 23 11:01:14.836: INFO: >>> kubeConfig: /root/.kube/config I0123 11:01:14.927536 8 log.go:172] (0xc0006e5ef0) (0xc000d35ea0) Create stream I0123 11:01:14.927584 8 log.go:172] (0xc0006e5ef0) (0xc000d35ea0) Stream added, broadcasting: 1 I0123 11:01:14.939178 8 log.go:172] (0xc0006e5ef0) Reply frame received for 1 I0123 11:01:14.939262 8 log.go:172] (0xc0006e5ef0) (0xc0017fec80) Create stream I0123 11:01:14.939280 8 log.go:172] (0xc0006e5ef0) (0xc0017fec80) Stream added, broadcasting: 3 I0123 11:01:14.941486 8 log.go:172] (0xc0006e5ef0) Reply frame received for 3 I0123 11:01:14.941570 8 log.go:172] (0xc0006e5ef0) (0xc000d35f40) Create stream I0123 11:01:14.941669 8 log.go:172] (0xc0006e5ef0) (0xc000d35f40) Stream added, broadcasting: 5 I0123 11:01:14.944521 8 log.go:172] (0xc0006e5ef0) Reply frame received for 5 I0123 11:01:15.112059 8 log.go:172] (0xc0006e5ef0) Data frame received for 3 I0123 11:01:15.112138 8 log.go:172] (0xc0017fec80) (3) Data frame handling I0123 11:01:15.112182 8 log.go:172] (0xc0017fec80) (3) Data frame sent I0123 11:01:15.254905 8 log.go:172] (0xc0006e5ef0) (0xc0017fec80) Stream removed, broadcasting: 3 I0123 11:01:15.255090 8 
log.go:172] (0xc0006e5ef0) Data frame received for 1 I0123 11:01:15.255141 8 log.go:172] (0xc000d35ea0) (1) Data frame handling I0123 11:01:15.255177 8 log.go:172] (0xc000d35ea0) (1) Data frame sent I0123 11:01:15.255251 8 log.go:172] (0xc0006e5ef0) (0xc000d35f40) Stream removed, broadcasting: 5 I0123 11:01:15.255430 8 log.go:172] (0xc0006e5ef0) (0xc000d35ea0) Stream removed, broadcasting: 1 I0123 11:01:15.255544 8 log.go:172] (0xc0006e5ef0) Go away received I0123 11:01:15.255990 8 log.go:172] (0xc0006e5ef0) (0xc000d35ea0) Stream removed, broadcasting: 1 I0123 11:01:15.256062 8 log.go:172] (0xc0006e5ef0) (0xc0017fec80) Stream removed, broadcasting: 3 I0123 11:01:15.256097 8 log.go:172] (0xc0006e5ef0) (0xc000d35f40) Stream removed, broadcasting: 5 Jan 23 11:01:15.256: INFO: Exec stderr: "" Jan 23 11:01:15.256: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7kn98 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 23 11:01:15.256: INFO: >>> kubeConfig: /root/.kube/config I0123 11:01:15.326314 8 log.go:172] (0xc001b4a4d0) (0xc000b6c1e0) Create stream I0123 11:01:15.326370 8 log.go:172] (0xc001b4a4d0) (0xc000b6c1e0) Stream added, broadcasting: 1 I0123 11:01:15.331243 8 log.go:172] (0xc001b4a4d0) Reply frame received for 1 I0123 11:01:15.331281 8 log.go:172] (0xc001b4a4d0) (0xc000b6c280) Create stream I0123 11:01:15.331287 8 log.go:172] (0xc001b4a4d0) (0xc000b6c280) Stream added, broadcasting: 3 I0123 11:01:15.333127 8 log.go:172] (0xc001b4a4d0) Reply frame received for 3 I0123 11:01:15.333151 8 log.go:172] (0xc001b4a4d0) (0xc0017fed20) Create stream I0123 11:01:15.333161 8 log.go:172] (0xc001b4a4d0) (0xc0017fed20) Stream added, broadcasting: 5 I0123 11:01:15.334756 8 log.go:172] (0xc001b4a4d0) Reply frame received for 5 I0123 11:01:15.445820 8 log.go:172] (0xc001b4a4d0) Data frame received for 3 I0123 11:01:15.445900 8 log.go:172] (0xc000b6c280) (3) Data 
frame handling I0123 11:01:15.445939 8 log.go:172] (0xc000b6c280) (3) Data frame sent I0123 11:01:15.598181 8 log.go:172] (0xc001b4a4d0) Data frame received for 1 I0123 11:01:15.598327 8 log.go:172] (0xc001b4a4d0) (0xc000b6c280) Stream removed, broadcasting: 3 I0123 11:01:15.598393 8 log.go:172] (0xc000b6c1e0) (1) Data frame handling I0123 11:01:15.598425 8 log.go:172] (0xc000b6c1e0) (1) Data frame sent I0123 11:01:15.598448 8 log.go:172] (0xc001b4a4d0) (0xc000b6c1e0) Stream removed, broadcasting: 1 I0123 11:01:15.598646 8 log.go:172] (0xc001b4a4d0) (0xc0017fed20) Stream removed, broadcasting: 5 I0123 11:01:15.598794 8 log.go:172] (0xc001b4a4d0) (0xc000b6c1e0) Stream removed, broadcasting: 1 I0123 11:01:15.598830 8 log.go:172] (0xc001b4a4d0) (0xc000b6c280) Stream removed, broadcasting: 3 I0123 11:01:15.598862 8 log.go:172] (0xc001b4a4d0) (0xc0017fed20) Stream removed, broadcasting: 5 Jan 23 11:01:15.598: INFO: Exec stderr: "" I0123 11:01:15.599012 8 log.go:172] (0xc001b4a4d0) Go away received Jan 23 11:01:15.599: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7kn98 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 23 11:01:15.599: INFO: >>> kubeConfig: /root/.kube/config I0123 11:01:15.744549 8 log.go:172] (0xc0008bd6b0) (0xc001b32b40) Create stream I0123 11:01:15.744606 8 log.go:172] (0xc0008bd6b0) (0xc001b32b40) Stream added, broadcasting: 1 I0123 11:01:15.749481 8 log.go:172] (0xc0008bd6b0) Reply frame received for 1 I0123 11:01:15.749512 8 log.go:172] (0xc0008bd6b0) (0xc0017ef0e0) Create stream I0123 11:01:15.749520 8 log.go:172] (0xc0008bd6b0) (0xc0017ef0e0) Stream added, broadcasting: 3 I0123 11:01:15.750375 8 log.go:172] (0xc0008bd6b0) Reply frame received for 3 I0123 11:01:15.750394 8 log.go:172] (0xc0008bd6b0) (0xc0017ef220) Create stream I0123 11:01:15.750400 8 log.go:172] (0xc0008bd6b0) (0xc0017ef220) Stream added, broadcasting: 5 I0123 
11:01:15.751257 8 log.go:172] (0xc0008bd6b0) Reply frame received for 5 I0123 11:01:15.866967 8 log.go:172] (0xc0008bd6b0) Data frame received for 3 I0123 11:01:15.867046 8 log.go:172] (0xc0017ef0e0) (3) Data frame handling I0123 11:01:15.867089 8 log.go:172] (0xc0017ef0e0) (3) Data frame sent I0123 11:01:15.999086 8 log.go:172] (0xc0008bd6b0) Data frame received for 1 I0123 11:01:15.999147 8 log.go:172] (0xc0008bd6b0) (0xc0017ef0e0) Stream removed, broadcasting: 3 I0123 11:01:15.999176 8 log.go:172] (0xc001b32b40) (1) Data frame handling I0123 11:01:15.999201 8 log.go:172] (0xc001b32b40) (1) Data frame sent I0123 11:01:15.999240 8 log.go:172] (0xc0008bd6b0) (0xc0017ef220) Stream removed, broadcasting: 5 I0123 11:01:15.999261 8 log.go:172] (0xc0008bd6b0) (0xc001b32b40) Stream removed, broadcasting: 1 I0123 11:01:15.999276 8 log.go:172] (0xc0008bd6b0) Go away received I0123 11:01:15.999589 8 log.go:172] (0xc0008bd6b0) (0xc001b32b40) Stream removed, broadcasting: 1 I0123 11:01:15.999640 8 log.go:172] (0xc0008bd6b0) (0xc0017ef0e0) Stream removed, broadcasting: 3 I0123 11:01:15.999656 8 log.go:172] (0xc0008bd6b0) (0xc0017ef220) Stream removed, broadcasting: 5 Jan 23 11:01:15.999: INFO: Exec stderr: "" Jan 23 11:01:15.999: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7kn98 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 23 11:01:15.999: INFO: >>> kubeConfig: /root/.kube/config I0123 11:01:16.097713 8 log.go:172] (0xc0006e58c0) (0xc002112140) Create stream I0123 11:01:16.097893 8 log.go:172] (0xc0006e58c0) (0xc002112140) Stream added, broadcasting: 1 I0123 11:01:16.104111 8 log.go:172] (0xc0006e58c0) Reply frame received for 1 I0123 11:01:16.104178 8 log.go:172] (0xc0006e58c0) (0xc0017be000) Create stream I0123 11:01:16.104194 8 log.go:172] (0xc0006e58c0) (0xc0017be000) Stream added, broadcasting: 3 I0123 11:01:16.105761 8 log.go:172] (0xc0006e58c0) 
Reply frame received for 3 I0123 11:01:16.105837 8 log.go:172] (0xc0006e58c0) (0xc0021121e0) Create stream I0123 11:01:16.105849 8 log.go:172] (0xc0006e58c0) (0xc0021121e0) Stream added, broadcasting: 5 I0123 11:01:16.107008 8 log.go:172] (0xc0006e58c0) Reply frame received for 5 I0123 11:01:16.291059 8 log.go:172] (0xc0006e58c0) Data frame received for 3 I0123 11:01:16.291109 8 log.go:172] (0xc0017be000) (3) Data frame handling I0123 11:01:16.291127 8 log.go:172] (0xc0017be000) (3) Data frame sent I0123 11:01:16.432437 8 log.go:172] (0xc0006e58c0) (0xc0017be000) Stream removed, broadcasting: 3 I0123 11:01:16.432598 8 log.go:172] (0xc0006e58c0) Data frame received for 1 I0123 11:01:16.432659 8 log.go:172] (0xc002112140) (1) Data frame handling I0123 11:01:16.432747 8 log.go:172] (0xc002112140) (1) Data frame sent I0123 11:01:16.432783 8 log.go:172] (0xc0006e58c0) (0xc002112140) Stream removed, broadcasting: 1 I0123 11:01:16.432869 8 log.go:172] (0xc0006e58c0) (0xc0021121e0) Stream removed, broadcasting: 5 I0123 11:01:16.433025 8 log.go:172] (0xc0006e58c0) Go away received I0123 11:01:16.433453 8 log.go:172] (0xc0006e58c0) (0xc002112140) Stream removed, broadcasting: 1 I0123 11:01:16.433478 8 log.go:172] (0xc0006e58c0) (0xc0017be000) Stream removed, broadcasting: 3 I0123 11:01:16.433492 8 log.go:172] (0xc0006e58c0) (0xc0021121e0) Stream removed, broadcasting: 5 Jan 23 11:01:16.433: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jan 23 11:01:16.433: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7kn98 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 23 11:01:16.433: INFO: >>> kubeConfig: /root/.kube/config I0123 11:01:16.546796 8 log.go:172] (0xc000dec160) (0xc0015a80a0) Create stream I0123 11:01:16.546918 8 log.go:172] (0xc000dec160) (0xc0015a80a0) Stream added, broadcasting: 1 
I0123 11:01:16.557749 8 log.go:172] (0xc000dec160) Reply frame received for 1 I0123 11:01:16.557915 8 log.go:172] (0xc000dec160) (0xc002112460) Create stream I0123 11:01:16.557941 8 log.go:172] (0xc000dec160) (0xc002112460) Stream added, broadcasting: 3 I0123 11:01:16.561071 8 log.go:172] (0xc000dec160) Reply frame received for 3 I0123 11:01:16.561180 8 log.go:172] (0xc000dec160) (0xc000d44000) Create stream I0123 11:01:16.561204 8 log.go:172] (0xc000dec160) (0xc000d44000) Stream added, broadcasting: 5 I0123 11:01:16.563462 8 log.go:172] (0xc000dec160) Reply frame received for 5 I0123 11:01:16.694493 8 log.go:172] (0xc000dec160) Data frame received for 3 I0123 11:01:16.694565 8 log.go:172] (0xc002112460) (3) Data frame handling I0123 11:01:16.694598 8 log.go:172] (0xc002112460) (3) Data frame sent I0123 11:01:16.815060 8 log.go:172] (0xc000dec160) Data frame received for 1 I0123 11:01:16.815153 8 log.go:172] (0xc000dec160) (0xc002112460) Stream removed, broadcasting: 3 I0123 11:01:16.815200 8 log.go:172] (0xc0015a80a0) (1) Data frame handling I0123 11:01:16.815227 8 log.go:172] (0xc0015a80a0) (1) Data frame sent I0123 11:01:16.815255 8 log.go:172] (0xc000dec160) (0xc000d44000) Stream removed, broadcasting: 5 I0123 11:01:16.815280 8 log.go:172] (0xc000dec160) (0xc0015a80a0) Stream removed, broadcasting: 1 I0123 11:01:16.815299 8 log.go:172] (0xc000dec160) Go away received I0123 11:01:16.815449 8 log.go:172] (0xc000dec160) (0xc0015a80a0) Stream removed, broadcasting: 1 I0123 11:01:16.815467 8 log.go:172] (0xc000dec160) (0xc002112460) Stream removed, broadcasting: 3 I0123 11:01:16.815477 8 log.go:172] (0xc000dec160) (0xc000d44000) Stream removed, broadcasting: 5 Jan 23 11:01:16.815: INFO: Exec stderr: "" Jan 23 11:01:16.815: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7kn98 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 23 11:01:16.815: INFO: >>> 
kubeConfig: /root/.kube/config I0123 11:01:16.875254 8 log.go:172] (0xc001b4a370) (0xc000d44280) Create stream I0123 11:01:16.875359 8 log.go:172] (0xc001b4a370) (0xc000d44280) Stream added, broadcasting: 1 I0123 11:01:16.881081 8 log.go:172] (0xc001b4a370) Reply frame received for 1 I0123 11:01:16.881108 8 log.go:172] (0xc001b4a370) (0xc002112500) Create stream I0123 11:01:16.881140 8 log.go:172] (0xc001b4a370) (0xc002112500) Stream added, broadcasting: 3 I0123 11:01:16.881845 8 log.go:172] (0xc001b4a370) Reply frame received for 3 I0123 11:01:16.881874 8 log.go:172] (0xc001b4a370) (0xc0015a8140) Create stream I0123 11:01:16.881883 8 log.go:172] (0xc001b4a370) (0xc0015a8140) Stream added, broadcasting: 5 I0123 11:01:16.882637 8 log.go:172] (0xc001b4a370) Reply frame received for 5 I0123 11:01:17.003094 8 log.go:172] (0xc001b4a370) Data frame received for 3 I0123 11:01:17.003153 8 log.go:172] (0xc002112500) (3) Data frame handling I0123 11:01:17.003167 8 log.go:172] (0xc002112500) (3) Data frame sent I0123 11:01:17.129478 8 log.go:172] (0xc001b4a370) Data frame received for 1 I0123 11:01:17.129559 8 log.go:172] (0xc000d44280) (1) Data frame handling I0123 11:01:17.129578 8 log.go:172] (0xc000d44280) (1) Data frame sent I0123 11:01:17.129842 8 log.go:172] (0xc001b4a370) (0xc000d44280) Stream removed, broadcasting: 1 I0123 11:01:17.129902 8 log.go:172] (0xc001b4a370) (0xc002112500) Stream removed, broadcasting: 3 I0123 11:01:17.130254 8 log.go:172] (0xc001b4a370) (0xc0015a8140) Stream removed, broadcasting: 5 I0123 11:01:17.130310 8 log.go:172] (0xc001b4a370) (0xc000d44280) Stream removed, broadcasting: 1 I0123 11:01:17.130319 8 log.go:172] (0xc001b4a370) (0xc002112500) Stream removed, broadcasting: 3 I0123 11:01:17.130327 8 log.go:172] (0xc001b4a370) (0xc0015a8140) Stream removed, broadcasting: 5 I0123 11:01:17.130576 8 log.go:172] (0xc001b4a370) Go away received Jan 23 11:01:17.130: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not 
kubelet-managed for pod with hostNetwork=true Jan 23 11:01:17.130: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7kn98 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 23 11:01:17.130: INFO: >>> kubeConfig: /root/.kube/config I0123 11:01:17.225602 8 log.go:172] (0xc001c52370) (0xc00168e140) Create stream I0123 11:01:17.225773 8 log.go:172] (0xc001c52370) (0xc00168e140) Stream added, broadcasting: 1 I0123 11:01:17.237494 8 log.go:172] (0xc001c52370) Reply frame received for 1 I0123 11:01:17.237563 8 log.go:172] (0xc001c52370) (0xc00168e1e0) Create stream I0123 11:01:17.237576 8 log.go:172] (0xc001c52370) (0xc00168e1e0) Stream added, broadcasting: 3 I0123 11:01:17.238886 8 log.go:172] (0xc001c52370) Reply frame received for 3 I0123 11:01:17.238963 8 log.go:172] (0xc001c52370) (0xc000d98000) Create stream I0123 11:01:17.238983 8 log.go:172] (0xc001c52370) (0xc000d98000) Stream added, broadcasting: 5 I0123 11:01:17.240227 8 log.go:172] (0xc001c52370) Reply frame received for 5 I0123 11:01:17.435467 8 log.go:172] (0xc001c52370) Data frame received for 3 I0123 11:01:17.435512 8 log.go:172] (0xc00168e1e0) (3) Data frame handling I0123 11:01:17.435530 8 log.go:172] (0xc00168e1e0) (3) Data frame sent I0123 11:01:17.551174 8 log.go:172] (0xc001c52370) Data frame received for 1 I0123 11:01:17.551286 8 log.go:172] (0xc00168e140) (1) Data frame handling I0123 11:01:17.551315 8 log.go:172] (0xc00168e140) (1) Data frame sent I0123 11:01:17.551338 8 log.go:172] (0xc001c52370) (0xc00168e140) Stream removed, broadcasting: 1 I0123 11:01:17.552917 8 log.go:172] (0xc001c52370) (0xc00168e1e0) Stream removed, broadcasting: 3 I0123 11:01:17.553052 8 log.go:172] (0xc001c52370) (0xc000d98000) Stream removed, broadcasting: 5 I0123 11:01:17.553180 8 log.go:172] (0xc001c52370) (0xc00168e140) Stream removed, broadcasting: 1 I0123 11:01:17.553189 8 log.go:172] 
(0xc001c52370) (0xc00168e1e0) Stream removed, broadcasting: 3 I0123 11:01:17.553195 8 log.go:172] (0xc001c52370) (0xc000d98000) Stream removed, broadcasting: 5 Jan 23 11:01:17.553: INFO: Exec stderr: "" Jan 23 11:01:17.553: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7kn98 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 23 11:01:17.553: INFO: >>> kubeConfig: /root/.kube/config I0123 11:01:17.554748 8 log.go:172] (0xc001c52370) Go away received I0123 11:01:17.636441 8 log.go:172] (0xc001546160) (0xc0021126e0) Create stream I0123 11:01:17.636474 8 log.go:172] (0xc001546160) (0xc0021126e0) Stream added, broadcasting: 1 I0123 11:01:17.640362 8 log.go:172] (0xc001546160) Reply frame received for 1 I0123 11:01:17.640405 8 log.go:172] (0xc001546160) (0xc00168e280) Create stream I0123 11:01:17.640423 8 log.go:172] (0xc001546160) (0xc00168e280) Stream added, broadcasting: 3 I0123 11:01:17.641177 8 log.go:172] (0xc001546160) Reply frame received for 3 I0123 11:01:17.641198 8 log.go:172] (0xc001546160) (0xc000d44320) Create stream I0123 11:01:17.641208 8 log.go:172] (0xc001546160) (0xc000d44320) Stream added, broadcasting: 5 I0123 11:01:17.641975 8 log.go:172] (0xc001546160) Reply frame received for 5 I0123 11:01:17.741975 8 log.go:172] (0xc001546160) Data frame received for 3 I0123 11:01:17.742017 8 log.go:172] (0xc00168e280) (3) Data frame handling I0123 11:01:17.742036 8 log.go:172] (0xc00168e280) (3) Data frame sent I0123 11:01:17.862853 8 log.go:172] (0xc001546160) (0xc00168e280) Stream removed, broadcasting: 3 I0123 11:01:17.863100 8 log.go:172] (0xc001546160) Data frame received for 1 I0123 11:01:17.863140 8 log.go:172] (0xc0021126e0) (1) Data frame handling I0123 11:01:17.863168 8 log.go:172] (0xc001546160) (0xc000d44320) Stream removed, broadcasting: 5 I0123 11:01:17.863248 8 log.go:172] (0xc0021126e0) (1) Data frame sent I0123 
11:01:17.863284 8 log.go:172] (0xc001546160) (0xc0021126e0) Stream removed, broadcasting: 1
I0123 11:01:17.863322 8 log.go:172] (0xc001546160) Go away received
I0123 11:01:17.863619 8 log.go:172] (0xc001546160) (0xc0021126e0) Stream removed, broadcasting: 1
I0123 11:01:17.863644 8 log.go:172] (0xc001546160) (0xc00168e280) Stream removed, broadcasting: 3
I0123 11:01:17.863659 8 log.go:172] (0xc001546160) (0xc000d44320) Stream removed, broadcasting: 5
Jan 23 11:01:17.863: INFO: Exec stderr: ""
Jan 23 11:01:17.863: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7kn98 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 11:01:17.863: INFO: >>> kubeConfig: /root/.kube/config
I0123 11:01:17.935118 8 log.go:172] (0xc0008bd760) (0xc0017be280) Create stream
I0123 11:01:17.935171 8 log.go:172] (0xc0008bd760) (0xc0017be280) Stream added, broadcasting: 1
I0123 11:01:17.939440 8 log.go:172] (0xc0008bd760) Reply frame received for 1
I0123 11:01:17.939507 8 log.go:172] (0xc0008bd760) (0xc000d98140) Create stream
I0123 11:01:17.939516 8 log.go:172] (0xc0008bd760) (0xc000d98140) Stream added, broadcasting: 3
I0123 11:01:17.941018 8 log.go:172] (0xc0008bd760) Reply frame received for 3
I0123 11:01:17.941042 8 log.go:172] (0xc0008bd760) (0xc00168e320) Create stream
I0123 11:01:17.941053 8 log.go:172] (0xc0008bd760) (0xc00168e320) Stream added, broadcasting: 5
I0123 11:01:17.942952 8 log.go:172] (0xc0008bd760) Reply frame received for 5
I0123 11:01:18.037912 8 log.go:172] (0xc0008bd760) Data frame received for 3
I0123 11:01:18.037989 8 log.go:172] (0xc000d98140) (3) Data frame handling
I0123 11:01:18.038019 8 log.go:172] (0xc000d98140) (3) Data frame sent
I0123 11:01:18.147339 8 log.go:172] (0xc0008bd760) (0xc000d98140) Stream removed, broadcasting: 3
I0123 11:01:18.147509 8 log.go:172] (0xc0008bd760) Data frame received for 1
I0123 11:01:18.147553 8 log.go:172] (0xc0017be280) (1) Data frame handling
I0123 11:01:18.147594 8 log.go:172] (0xc0017be280) (1) Data frame sent
I0123 11:01:18.147623 8 log.go:172] (0xc0008bd760) (0xc00168e320) Stream removed, broadcasting: 5
I0123 11:01:18.147662 8 log.go:172] (0xc0008bd760) (0xc0017be280) Stream removed, broadcasting: 1
I0123 11:01:18.147689 8 log.go:172] (0xc0008bd760) Go away received
I0123 11:01:18.148176 8 log.go:172] (0xc0008bd760) (0xc0017be280) Stream removed, broadcasting: 1
I0123 11:01:18.148221 8 log.go:172] (0xc0008bd760) (0xc000d98140) Stream removed, broadcasting: 3
I0123 11:01:18.148279 8 log.go:172] (0xc0008bd760) (0xc00168e320) Stream removed, broadcasting: 5
Jan 23 11:01:18.148: INFO: Exec stderr: ""
Jan 23 11:01:18.148: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7kn98 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 11:01:18.148: INFO: >>> kubeConfig: /root/.kube/config
I0123 11:01:18.214109 8 log.go:172] (0xc001c52840) (0xc00168e5a0) Create stream
I0123 11:01:18.214300 8 log.go:172] (0xc001c52840) (0xc00168e5a0) Stream added, broadcasting: 1
I0123 11:01:18.220527 8 log.go:172] (0xc001c52840) Reply frame received for 1
I0123 11:01:18.220576 8 log.go:172] (0xc001c52840) (0xc000d981e0) Create stream
I0123 11:01:18.220588 8 log.go:172] (0xc001c52840) (0xc000d981e0) Stream added, broadcasting: 3
I0123 11:01:18.221728 8 log.go:172] (0xc001c52840) Reply frame received for 3
I0123 11:01:18.221745 8 log.go:172] (0xc001c52840) (0xc00168e640) Create stream
I0123 11:01:18.221749 8 log.go:172] (0xc001c52840) (0xc00168e640) Stream added, broadcasting: 5
I0123 11:01:18.222700 8 log.go:172] (0xc001c52840) Reply frame received for 5
I0123 11:01:18.337778 8 log.go:172] (0xc001c52840) Data frame received for 3
I0123 11:01:18.338070 8 log.go:172] (0xc000d981e0) (3) Data frame handling
I0123 11:01:18.338170 8 log.go:172] (0xc000d981e0) (3) Data frame sent
I0123 11:01:18.483249 8 log.go:172] (0xc001c52840) Data frame received for 1
I0123 11:01:18.483716 8 log.go:172] (0xc00168e5a0) (1) Data frame handling
I0123 11:01:18.483858 8 log.go:172] (0xc00168e5a0) (1) Data frame sent
I0123 11:01:18.484009 8 log.go:172] (0xc001c52840) (0xc00168e5a0) Stream removed, broadcasting: 1
I0123 11:01:18.485117 8 log.go:172] (0xc001c52840) (0xc000d981e0) Stream removed, broadcasting: 3
I0123 11:01:18.485295 8 log.go:172] (0xc001c52840) (0xc00168e640) Stream removed, broadcasting: 5
I0123 11:01:18.485331 8 log.go:172] (0xc001c52840) Go away received
I0123 11:01:18.485440 8 log.go:172] (0xc001c52840) (0xc00168e5a0) Stream removed, broadcasting: 1
I0123 11:01:18.485475 8 log.go:172] (0xc001c52840) (0xc000d981e0) Stream removed, broadcasting: 3
I0123 11:01:18.485500 8 log.go:172] (0xc001c52840) (0xc00168e640) Stream removed, broadcasting: 5
Jan 23 11:01:18.485: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:01:18.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-7kn98" for this suite.
Jan 23 11:02:14.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:02:14.747: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-7kn98, resource: bindings, ignored listing per whitelist
Jan 23 11:02:14.762: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-7kn98 deletion completed in 56.253977004s
• [SLOW TEST:84.444 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:02:14.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0123 11:02:25.417973 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 23 11:02:25.418: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:02:25.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-nhd2c" for this suite.
Jan 23 11:02:31.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:02:31.854: INFO: namespace: e2e-tests-gc-nhd2c, resource: bindings, ignored listing per whitelist
Jan 23 11:02:31.934: INFO: namespace e2e-tests-gc-nhd2c deletion completed in 6.504621411s
• [SLOW TEST:17.171 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:02:31.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0123 11:03:14.652581 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 23 11:03:14.652: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:03:14.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-8svmn" for this suite.
Jan 23 11:03:24.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:03:24.989: INFO: namespace: e2e-tests-gc-8svmn, resource: bindings, ignored listing per whitelist
Jan 23 11:03:25.215: INFO: namespace e2e-tests-gc-8svmn deletion completed in 10.556625188s
• [SLOW TEST:53.281 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:03:25.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 23 11:03:26.211: INFO: Waiting up to 5m0s for pod "pod-fab39e4e-3dcf-11ea-bb65-0242ac110005" in namespace "e2e-tests-emptydir-d8nln" to be "success or failure"
Jan 23 11:03:26.449: INFO: Pod "pod-fab39e4e-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 238.190833ms
Jan 23 11:03:29.068: INFO: Pod "pod-fab39e4e-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.856576688s
Jan 23 11:03:31.092: INFO: Pod "pod-fab39e4e-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.881182894s
Jan 23 11:03:33.099: INFO: Pod "pod-fab39e4e-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.887548244s
Jan 23 11:03:35.114: INFO: Pod "pod-fab39e4e-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.902861873s
Jan 23 11:03:37.139: INFO: Pod "pod-fab39e4e-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.927892737s
Jan 23 11:03:39.150: INFO: Pod "pod-fab39e4e-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.938448851s
Jan 23 11:03:41.186: INFO: Pod "pod-fab39e4e-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.974472595s
Jan 23 11:03:43.207: INFO: Pod "pod-fab39e4e-3dcf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.996043966s
Jan 23 11:03:45.223: INFO: Pod "pod-fab39e4e-3dcf-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.0119166s
STEP: Saw pod success
Jan 23 11:03:45.223: INFO: Pod "pod-fab39e4e-3dcf-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 11:03:45.231: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-fab39e4e-3dcf-11ea-bb65-0242ac110005 container test-container:
STEP: delete the pod
Jan 23 11:03:45.361: INFO: Waiting for pod pod-fab39e4e-3dcf-11ea-bb65-0242ac110005 to disappear
Jan 23 11:03:45.588: INFO: Pod pod-fab39e4e-3dcf-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:03:45.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-d8nln" for this suite.
Jan 23 11:03:51.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:03:52.009: INFO: namespace: e2e-tests-emptydir-d8nln, resource: bindings, ignored listing per whitelist
Jan 23 11:03:52.009: INFO: namespace e2e-tests-emptydir-d8nln deletion completed in 6.385081475s
• [SLOW TEST:26.793 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:03:52.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 23 11:03:52.156: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 23 11:03:52.219: INFO: Waiting for terminating namespaces to be deleted...
Jan 23 11:03:52.225: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Jan 23 11:03:52.248: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 23 11:03:52.248: INFO: Container coredns ready: true, restart count 0
Jan 23 11:03:52.248: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Jan 23 11:03:52.248: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Jan 23 11:03:52.248: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Jan 23 11:03:52.248: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 23 11:03:52.248: INFO: Container coredns ready: true, restart count 0
Jan 23 11:03:52.248: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 23 11:03:52.248: INFO: Container kube-proxy ready: true, restart count 0
Jan 23 11:03:52.248: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Jan 23 11:03:52.248: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 23 11:03:52.248: INFO: Container weave ready: true, restart count 0
Jan 23 11:03:52.248: INFO: Container weave-npc ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-1044f92f-3dd0-11ea-bb65-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-1044f92f-3dd0-11ea-bb65-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-1044f92f-3dd0-11ea-bb65-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:04:14.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-52mbq" for this suite.
Jan 23 11:04:34.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:04:34.911: INFO: namespace: e2e-tests-sched-pred-52mbq, resource: bindings, ignored listing per whitelist
Jan 23 11:04:34.936: INFO: namespace e2e-tests-sched-pred-52mbq deletion completed in 20.157436913s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:42.925 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:04:34.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-23c6436e-3dd0-11ea-bb65-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:04:49.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-t5s42" for this suite.
Jan 23 11:05:13.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:05:13.385: INFO: namespace: e2e-tests-configmap-t5s42, resource: bindings, ignored listing per whitelist
Jan 23 11:05:13.503: INFO: namespace e2e-tests-configmap-t5s42 deletion completed in 24.250452556s
• [SLOW TEST:38.567 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:05:13.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jan 23 11:05:13.708: INFO: Waiting up to 5m0s for pod "client-containers-3ac911ce-3dd0-11ea-bb65-0242ac110005" in namespace "e2e-tests-containers-j7hd8" to be "success or failure"
Jan 23 11:05:13.717: INFO: Pod "client-containers-3ac911ce-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.810079ms
Jan 23 11:05:15.733: INFO: Pod "client-containers-3ac911ce-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025109303s
Jan 23 11:05:17.759: INFO: Pod "client-containers-3ac911ce-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050208988s
Jan 23 11:05:19.775: INFO: Pod "client-containers-3ac911ce-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066442547s
Jan 23 11:05:22.069: INFO: Pod "client-containers-3ac911ce-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.360389537s
Jan 23 11:05:24.103: INFO: Pod "client-containers-3ac911ce-3dd0-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.394490895s
STEP: Saw pod success
Jan 23 11:05:24.103: INFO: Pod "client-containers-3ac911ce-3dd0-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 11:05:24.116: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-3ac911ce-3dd0-11ea-bb65-0242ac110005 container test-container:
STEP: delete the pod
Jan 23 11:05:24.208: INFO: Waiting for pod client-containers-3ac911ce-3dd0-11ea-bb65-0242ac110005 to disappear
Jan 23 11:05:24.254: INFO: Pod client-containers-3ac911ce-3dd0-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:05:24.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-j7hd8" for this suite.
Jan 23 11:05:30.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:05:30.524: INFO: namespace: e2e-tests-containers-j7hd8, resource: bindings, ignored listing per whitelist
Jan 23 11:05:30.633: INFO: namespace e2e-tests-containers-j7hd8 deletion completed in 6.369538096s
• [SLOW TEST:17.130 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:05:30.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-44f93864-3dd0-11ea-bb65-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 23 11:05:30.816: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-44fa257a-3dd0-11ea-bb65-0242ac110005" in namespace "e2e-tests-projected-z8545" to be "success or failure"
Jan 23 11:05:30.831: INFO: Pod "pod-projected-secrets-44fa257a-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.287185ms
Jan 23 11:05:32.952: INFO: Pod "pod-projected-secrets-44fa257a-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13538868s
Jan 23 11:05:34.990: INFO: Pod "pod-projected-secrets-44fa257a-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173724445s
Jan 23 11:05:37.116: INFO: Pod "pod-projected-secrets-44fa257a-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.300029903s
Jan 23 11:05:39.134: INFO: Pod "pod-projected-secrets-44fa257a-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.317935011s
Jan 23 11:05:41.147: INFO: Pod "pod-projected-secrets-44fa257a-3dd0-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.33029955s
STEP: Saw pod success
Jan 23 11:05:41.147: INFO: Pod "pod-projected-secrets-44fa257a-3dd0-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 11:05:41.151: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-44fa257a-3dd0-11ea-bb65-0242ac110005 container projected-secret-volume-test:
STEP: delete the pod
Jan 23 11:05:41.233: INFO: Waiting for pod pod-projected-secrets-44fa257a-3dd0-11ea-bb65-0242ac110005 to disappear
Jan 23 11:05:41.350: INFO: Pod pod-projected-secrets-44fa257a-3dd0-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:05:41.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-z8545" for this suite.
Jan 23 11:05:47.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:05:47.748: INFO: namespace: e2e-tests-projected-z8545, resource: bindings, ignored listing per whitelist
Jan 23 11:05:47.793: INFO: namespace e2e-tests-projected-z8545 deletion completed in 6.424220857s
• [SLOW TEST:17.159 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:05:47.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:05:48.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-fcmnq" for this suite.
Jan 23 11:05:54.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:05:54.526: INFO: namespace: e2e-tests-kubelet-test-fcmnq, resource: bindings, ignored listing per whitelist
Jan 23 11:05:54.617: INFO: namespace e2e-tests-kubelet-test-fcmnq deletion completed in 6.224462628s
• [SLOW TEST:6.823 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:05:54.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Jan 23 11:05:54.870: INFO: Waiting up to 5m0s for pod "var-expansion-534e9ca9-3dd0-11ea-bb65-0242ac110005" in namespace "e2e-tests-var-expansion-2hsks" to be "success or failure"
Jan 23 11:05:54.877: INFO: Pod "var-expansion-534e9ca9-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.899935ms
Jan 23 11:05:56.901: INFO: Pod "var-expansion-534e9ca9-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030677201s
Jan 23 11:05:58.993: INFO: Pod "var-expansion-534e9ca9-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122855572s
Jan 23 11:06:01.027: INFO: Pod "var-expansion-534e9ca9-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.156760155s
Jan 23 11:06:03.042: INFO: Pod "var-expansion-534e9ca9-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.172057685s
Jan 23 11:06:05.055: INFO: Pod "var-expansion-534e9ca9-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.185154117s
Jan 23 11:06:07.073: INFO: Pod "var-expansion-534e9ca9-3dd0-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.203354259s
STEP: Saw pod success
Jan 23 11:06:07.073: INFO: Pod "var-expansion-534e9ca9-3dd0-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 11:06:07.080: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-534e9ca9-3dd0-11ea-bb65-0242ac110005 container dapi-container:
STEP: delete the pod
Jan 23 11:06:07.287: INFO: Waiting for pod var-expansion-534e9ca9-3dd0-11ea-bb65-0242ac110005 to disappear
Jan 23 11:06:07.299: INFO: Pod var-expansion-534e9ca9-3dd0-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:06:07.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-2hsks" for this suite.
Jan 23 11:06:13.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:06:13.607: INFO: namespace: e2e-tests-var-expansion-2hsks, resource: bindings, ignored listing per whitelist
Jan 23 11:06:13.657: INFO: namespace e2e-tests-var-expansion-2hsks deletion completed in 6.348525555s
• [SLOW TEST:19.040 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:06:13.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 23 11:06:13.898: INFO: PodSpec: initContainers in spec.initContainers
Jan 23 11:07:22.292: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""},
ObjectMeta:v1.ObjectMeta{Name:"pod-init-5eab2b2f-3dd0-11ea-bb65-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-gt4mk", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-gt4mk/pods/pod-init-5eab2b2f-3dd0-11ea-bb65-0242ac110005", UID:"5ead3297-3dd0-11ea-a994-fa163e34d433", ResourceVersion:"19176808", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715374373, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"898943355"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-lnh72", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001293900), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), 
ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lnh72", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lnh72", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", 
Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lnh72", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000a5d578), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0007f3500), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", 
Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000a5d5f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000a5d610)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000a5d618), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000a5d61c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715374374, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715374374, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715374374, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715374373, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc00165ed80), 
InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00067ed90)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00067ee00)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://42cd0e241a9d4ca458dfee264ee42195a9e62830651481f0a1272d59deade7bc"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00165edc0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00165eda0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 11:07:22.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-gt4mk" for this suite. 
Jan 23 11:07:46.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:07:46.555: INFO: namespace: e2e-tests-init-container-gt4mk, resource: bindings, ignored listing per whitelist
Jan 23 11:07:46.795: INFO: namespace e2e-tests-init-container-gt4mk deletion completed in 24.476681196s
• [SLOW TEST:93.138 seconds]
[k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:07:46.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 23 11:07:46.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Jan 23 11:07:47.016: INFO: stderr: ""
Jan 23 11:07:47.016: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", 
GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Jan 23 11:07:47.022: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:07:47.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9gzxm" for this suite.
Jan 23 11:07:53.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:07:53.210: INFO: namespace: e2e-tests-kubectl-9gzxm, resource: bindings, ignored listing per whitelist
Jan 23 11:07:53.216: INFO: namespace e2e-tests-kubectl-9gzxm deletion completed in 6.184250188s
S [SKIPPING] [6.421 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
    Jan 23 11:07:47.022: Not supported for server versions before "1.13.12"
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:07:53.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api 
object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:08:03.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-lv2qf" for this suite.
Jan 23 11:08:57.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:08:58.033: INFO: namespace: e2e-tests-kubelet-test-lv2qf, resource: bindings, ignored listing per whitelist
Jan 23 11:08:58.125: INFO: namespace e2e-tests-kubelet-test-lv2qf deletion completed in 54.247417164s
• [SLOW TEST:64.909 seconds]
[k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:08:58.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a 
namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Jan 23 11:08:58.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-bcxvr run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jan 23 11:09:11.215: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0123 11:09:09.557365 302 log.go:172] (0xc000138790) (0xc000864140) Create stream\nI0123 11:09:09.557432 302 log.go:172] (0xc000138790) (0xc000864140) Stream added, broadcasting: 1\nI0123 11:09:09.570266 302 log.go:172] (0xc000138790) Reply frame received for 1\nI0123 11:09:09.570409 302 log.go:172] (0xc000138790) (0xc000595e00) Create stream\nI0123 11:09:09.570444 302 log.go:172] (0xc000138790) (0xc000595e00) Stream added, broadcasting: 3\nI0123 11:09:09.576858 302 log.go:172] (0xc000138790) Reply frame received for 3\nI0123 11:09:09.577298 302 log.go:172] (0xc000138790) (0xc000884000) Create stream\nI0123 11:09:09.577327 302 log.go:172] (0xc000138790) (0xc000884000) Stream added, broadcasting: 5\nI0123 11:09:09.581149 302 log.go:172] (0xc000138790) Reply frame received for 5\nI0123 11:09:09.581248 302 log.go:172] (0xc000138790) (0xc0008641e0) Create stream\nI0123 11:09:09.581277 302 log.go:172] (0xc000138790) (0xc0008641e0) Stream added, broadcasting: 
7\nI0123 11:09:09.584205 302 log.go:172] (0xc000138790) Reply frame received for 7\nI0123 11:09:09.584932 302 log.go:172] (0xc000595e00) (3) Writing data frame\nI0123 11:09:09.585328 302 log.go:172] (0xc000595e00) (3) Writing data frame\nI0123 11:09:09.604763 302 log.go:172] (0xc000138790) Data frame received for 5\nI0123 11:09:09.604862 302 log.go:172] (0xc000884000) (5) Data frame handling\nI0123 11:09:09.604948 302 log.go:172] (0xc000884000) (5) Data frame sent\nI0123 11:09:09.624132 302 log.go:172] (0xc000138790) Data frame received for 5\nI0123 11:09:09.624346 302 log.go:172] (0xc000884000) (5) Data frame handling\nI0123 11:09:09.624421 302 log.go:172] (0xc000884000) (5) Data frame sent\nI0123 11:09:11.141408 302 log.go:172] (0xc000138790) (0xc0008641e0) Stream removed, broadcasting: 7\nI0123 11:09:11.141565 302 log.go:172] (0xc000138790) Data frame received for 1\nI0123 11:09:11.141619 302 log.go:172] (0xc000138790) (0xc000595e00) Stream removed, broadcasting: 3\nI0123 11:09:11.141720 302 log.go:172] (0xc000864140) (1) Data frame handling\nI0123 11:09:11.141748 302 log.go:172] (0xc000864140) (1) Data frame sent\nI0123 11:09:11.141783 302 log.go:172] (0xc000138790) (0xc000864140) Stream removed, broadcasting: 1\nI0123 11:09:11.142081 302 log.go:172] (0xc000138790) (0xc000884000) Stream removed, broadcasting: 5\nI0123 11:09:11.142143 302 log.go:172] (0xc000138790) (0xc000864140) Stream removed, broadcasting: 1\nI0123 11:09:11.142162 302 log.go:172] (0xc000138790) (0xc000595e00) Stream removed, broadcasting: 3\nI0123 11:09:11.142179 302 log.go:172] (0xc000138790) (0xc000884000) Stream removed, broadcasting: 5\nI0123 11:09:11.142202 302 log.go:172] (0xc000138790) (0xc0008641e0) Stream removed, broadcasting: 7\nI0123 11:09:11.147625 302 log.go:172] (0xc000138790) Go away received\n" Jan 23 11:09:11.216: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] 
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:09:13.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bcxvr" for this suite.
Jan 23 11:09:19.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:09:19.574: INFO: namespace: e2e-tests-kubectl-bcxvr, resource: bindings, ignored listing per whitelist
Jan 23 11:09:19.693: INFO: namespace e2e-tests-kubectl-bcxvr deletion completed in 6.454738784s
• [SLOW TEST:21.567 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:09:19.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-cda0241e-3dd0-11ea-bb65-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 23 11:09:20.296: INFO: Waiting up to 5m0s for pod "pod-secrets-cdbc1b5a-3dd0-11ea-bb65-0242ac110005" in namespace "e2e-tests-secrets-nxqjl" to be "success or failure"
Jan 23 11:09:20.443: INFO: Pod "pod-secrets-cdbc1b5a-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 146.955333ms
Jan 23 11:09:22.478: INFO: Pod "pod-secrets-cdbc1b5a-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18200169s
Jan 23 11:09:24.501: INFO: Pod "pod-secrets-cdbc1b5a-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.205254644s
Jan 23 11:09:26.525: INFO: Pod "pod-secrets-cdbc1b5a-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.22882243s
Jan 23 11:09:28.547: INFO: Pod "pod-secrets-cdbc1b5a-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.251600992s
Jan 23 11:09:30.760: INFO: Pod "pod-secrets-cdbc1b5a-3dd0-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.464085528s
STEP: Saw pod success
Jan 23 11:09:30.760: INFO: Pod "pod-secrets-cdbc1b5a-3dd0-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 11:09:30.786: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-cdbc1b5a-3dd0-11ea-bb65-0242ac110005 container secret-volume-test:
STEP: delete the pod
Jan 23 11:09:30.975: INFO: Waiting for pod pod-secrets-cdbc1b5a-3dd0-11ea-bb65-0242ac110005 to disappear
Jan 23 11:09:30.991: INFO: Pod pod-secrets-cdbc1b5a-3dd0-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:09:30.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-nxqjl" for this suite.
Jan 23 11:09:37.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:09:37.139: INFO: namespace: e2e-tests-secrets-nxqjl, resource: bindings, ignored listing per whitelist
Jan 23 11:09:37.222: INFO: namespace e2e-tests-secrets-nxqjl deletion completed in 6.223147355s
STEP: Destroying namespace "e2e-tests-secret-namespace-nzwm2" for this suite.
Jan 23 11:09:43.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:09:43.525: INFO: namespace: e2e-tests-secret-namespace-nzwm2, resource: bindings, ignored listing per whitelist
Jan 23 11:09:43.543: INFO: namespace e2e-tests-secret-namespace-nzwm2 deletion completed in 6.320075726s
• [SLOW TEST:23.849 seconds]
[sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:09:43.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-dbc36dde-3dd0-11ea-bb65-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 23 11:09:43.797: INFO: Waiting up to 5m0s for pod "pod-configmaps-dbc48420-3dd0-11ea-bb65-0242ac110005" in namespace "e2e-tests-configmap-ltl7s" to be "success or failure"
Jan 23 11:09:43.803: INFO: Pod 
"pod-configmaps-dbc48420-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.537069ms
Jan 23 11:09:45.832: INFO: Pod "pod-configmaps-dbc48420-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034966316s
Jan 23 11:09:47.850: INFO: Pod "pod-configmaps-dbc48420-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053021807s
Jan 23 11:09:49.871: INFO: Pod "pod-configmaps-dbc48420-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074416176s
Jan 23 11:09:51.886: INFO: Pod "pod-configmaps-dbc48420-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089636637s
Jan 23 11:09:54.078: INFO: Pod "pod-configmaps-dbc48420-3dd0-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.280960266s
STEP: Saw pod success
Jan 23 11:09:54.078: INFO: Pod "pod-configmaps-dbc48420-3dd0-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 11:09:54.088: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-dbc48420-3dd0-11ea-bb65-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Jan 23 11:09:54.649: INFO: Waiting for pod pod-configmaps-dbc48420-3dd0-11ea-bb65-0242ac110005 to disappear
Jan 23 11:09:54.683: INFO: Pod pod-configmaps-dbc48420-3dd0-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:09:54.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-ltl7s" for this suite.
Jan 23 11:10:00.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:10:00.950: INFO: namespace: e2e-tests-configmap-ltl7s, resource: bindings, ignored listing per whitelist
Jan 23 11:10:01.023: INFO: namespace e2e-tests-configmap-ltl7s deletion completed in 6.309694484s
• [SLOW TEST:17.480 seconds]
[sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:10:01.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 23 11:10:01.387: INFO: Waiting up to 5m0s for pod "pod-e63690c3-3dd0-11ea-bb65-0242ac110005" in namespace "e2e-tests-emptydir-jghmt" to be "success or failure"
Jan 23 11:10:01.664: INFO: Pod "pod-e63690c3-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 277.063042ms
Jan 23 11:10:03.780: INFO: Pod "pod-e63690c3-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.392842293s
Jan 23 11:10:05.792: INFO: Pod "pod-e63690c3-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.404972108s
Jan 23 11:10:07.807: INFO: Pod "pod-e63690c3-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.42037172s
Jan 23 11:10:09.834: INFO: Pod "pod-e63690c3-3dd0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.446900731s
Jan 23 11:10:11.846: INFO: Pod "pod-e63690c3-3dd0-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.459649721s
STEP: Saw pod success
Jan 23 11:10:11.846: INFO: Pod "pod-e63690c3-3dd0-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 11:10:11.851: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e63690c3-3dd0-11ea-bb65-0242ac110005 container test-container:
STEP: delete the pod
Jan 23 11:10:12.390: INFO: Waiting for pod pod-e63690c3-3dd0-11ea-bb65-0242ac110005 to disappear
Jan 23 11:10:12.409: INFO: Pod pod-e63690c3-3dd0-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:10:12.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-jghmt" for this suite.
Jan 23 11:10:18.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:10:18.645: INFO: namespace: e2e-tests-emptydir-jghmt, resource: bindings, ignored listing per whitelist
Jan 23 11:10:18.872: INFO: namespace e2e-tests-emptydir-jghmt deletion completed in 6.451205408s
• [SLOW TEST:17.849 seconds]
[sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:10:18.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 23 11:10:37.208: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 23 11:10:37.219: INFO: Pod pod-with-prestop-http-hook still exists
Jan 23 11:10:39.220: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 23 11:10:39.238: INFO: Pod pod-with-prestop-http-hook still exists
Jan 23 11:10:41.219: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 23 11:10:41.325: INFO: Pod pod-with-prestop-http-hook still exists
Jan 23 11:10:43.219: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 23 11:10:43.234: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:10:43.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-zfl8l" for this suite.
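The pod under test (`pod-with-prestop-http-hook`) wires an HTTP `preStop` lifecycle hook: deleting the pod makes the kubelet issue an HTTP GET against the handler container before the main container stops, which is why the test then polls for the pod to disappear and checks the handler afterwards. A hand-written sketch of such a manifest, expressed as a Python dict (the image, host, port, and path values here are placeholders, not the ones the suite uses):

```python
def prestop_http_pod(name, hook_host, hook_port, hook_path='/echo'):
    """Build a pod manifest whose container fires an HTTP preStop hook on deletion."""
    return {
        'apiVersion': 'v1',
        'kind': 'Pod',
        'metadata': {'name': name},
        'spec': {
            'containers': [{
                'name': 'main',
                'image': 'nginx',  # placeholder image, not the test's
                'lifecycle': {
                    'preStop': {
                        # kubelet performs this GET before stopping the container
                        'httpGet': {'host': hook_host,
                                    'port': hook_port,
                                    'path': hook_path},
                    },
                },
            }],
        },
    }
```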
Jan 23 11:11:07.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:11:07.372: INFO: namespace: e2e-tests-container-lifecycle-hook-zfl8l, resource: bindings, ignored listing per whitelist
Jan 23 11:11:07.492: INFO: namespace e2e-tests-container-lifecycle-hook-zfl8l deletion completed in 24.210838361s
• [SLOW TEST:48.620 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:11:07.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jan 23 11:11:07.722: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-czhk5" to be "success or failure"
Jan 23 11:11:07.778: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 55.669205ms
Jan 23 11:11:09.788: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065916753s
Jan 23 11:11:11.807: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084832116s
Jan 23 11:11:13.819: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096597389s
Jan 23 11:11:15.885: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.162864137s
Jan 23 11:11:17.906: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.183405565s
Jan 23 11:11:19.922: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 12.200183772s
Jan 23 11:11:21.968: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.245652835s
STEP: Saw pod success
Jan 23 11:11:21.968: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 23 11:11:21.997: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1:
STEP: delete the pod
Jan 23 11:11:22.155: INFO: Waiting for pod pod-host-path-test to disappear
Jan 23 11:11:22.225: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:11:22.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-czhk5" for this suite.
Jan 23 11:11:28.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:11:28.420: INFO: namespace: e2e-tests-hostpath-czhk5, resource: bindings, ignored listing per whitelist
Jan 23 11:11:28.548: INFO: namespace e2e-tests-hostpath-czhk5 deletion completed in 6.312177864s
• [SLOW TEST:21.055 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:11:28.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 23 11:11:39.739: INFO: Successfully updated pod "annotationupdate1a609f52-3dd1-11ea-bb65-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:11:41.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hq5z4" for this suite.
Jan 23 11:12:06.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:12:06.279: INFO: namespace: e2e-tests-projected-hq5z4, resource: bindings, ignored listing per whitelist
Jan 23 11:12:06.286: INFO: namespace e2e-tests-projected-hq5z4 deletion completed in 24.357125283s
• [SLOW TEST:37.737 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:12:06.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 23 11:12:06.483: INFO: Waiting up to 5m0s for pod "downwardapi-volume-30cf1941-3dd1-11ea-bb65-0242ac110005" in namespace "e2e-tests-projected-c5b4q" to be "success or failure"
Jan 23 11:12:06.498: INFO: Pod "downwardapi-volume-30cf1941-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.412681ms
Jan 23 11:12:08.867: INFO: Pod "downwardapi-volume-30cf1941-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.383857052s
Jan 23 11:12:10.881: INFO: Pod "downwardapi-volume-30cf1941-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.397583402s
Jan 23 11:12:13.105: INFO: Pod "downwardapi-volume-30cf1941-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.62142612s
Jan 23 11:12:15.383: INFO: Pod "downwardapi-volume-30cf1941-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.899339227s
Jan 23 11:12:17.511: INFO: Pod "downwardapi-volume-30cf1941-3dd1-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.02799165s
STEP: Saw pod success
Jan 23 11:12:17.511: INFO: Pod "downwardapi-volume-30cf1941-3dd1-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 11:12:17.526: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-30cf1941-3dd1-11ea-bb65-0242ac110005 container client-container:
STEP: delete the pod
Jan 23 11:12:17.889: INFO: Waiting for pod downwardapi-volume-30cf1941-3dd1-11ea-bb65-0242ac110005 to disappear
Jan 23 11:12:17.911: INFO: Pod downwardapi-volume-30cf1941-3dd1-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:12:17.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-c5b4q" for this suite.
Jan 23 11:12:24.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:12:24.063: INFO: namespace: e2e-tests-projected-c5b4q, resource: bindings, ignored listing per whitelist
Jan 23 11:12:24.151: INFO: namespace e2e-tests-projected-c5b4q deletion completed in 6.225353928s
• [SLOW TEST:17.865 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:12:24.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-p98gw
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-p98gw to expose endpoints map[]
Jan 23 11:12:24.620: INFO: Get endpoints failed (84.338256ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan 23 11:12:25.643: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-p98gw exposes endpoints map[] (1.107099413s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-p98gw
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-p98gw to expose endpoints map[pod1:[100]]
Jan 23 11:12:31.400: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (5.726334812s elapsed, will retry)
Jan 23 11:12:35.480: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-p98gw exposes endpoints map[pod1:[100]] (9.805926364s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-p98gw
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-p98gw to expose endpoints map[pod1:[100] pod2:[101]]
Jan 23 11:12:39.716: INFO: Unexpected endpoints: found map[3c41d3e9-3dd1-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (4.226606607s elapsed, will retry)
Jan 23 11:12:44.039: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-p98gw exposes endpoints map[pod1:[100] pod2:[101]] (8.549166545s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-p98gw
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-p98gw to expose endpoints map[pod2:[101]]
Jan 23 11:12:45.216: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-p98gw exposes endpoints map[pod2:[101]] (1.131487774s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-p98gw
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-p98gw to expose endpoints map[]
Jan 23 11:12:46.297: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-p98gw exposes endpoints map[] (1.06928392s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
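Each "waiting ... to expose endpoints map[...]" step above compares the service's current endpoints, keyed by pod name with a list of container ports, against an expected map, retrying on mismatch (the "will retry" entries) until validation succeeds or 3m0s pass. The comparison itself, stripped of the retry loop, can be sketched like this (function name is illustrative):

```python
def endpoints_match(observed, expected):
    """Compare endpoint maps of the form {pod_name: [ports]}.

    Port order is irrelevant, so both sides are normalised to sorted
    lists before comparing -- matching the log's map[pod1:[100] pod2:[101]]
    shape regardless of iteration order.
    """
    normalise = lambda m: {pod: sorted(ports) for pod, ports in m.items()}
    return normalise(observed) == normalise(expected)
```

In the run above the first comparison after creating pod2 fails because the endpoint is still keyed by a UID-like name rather than `pod2`, and the retry a few seconds later succeeds.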
Jan 23 11:12:48.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-p98gw" for this suite. Jan 23 11:13:12.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:13:12.746: INFO: namespace: e2e-tests-services-p98gw, resource: bindings, ignored listing per whitelist Jan 23 11:13:12.802: INFO: namespace e2e-tests-services-p98gw deletion completed in 24.467126385s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:48.650 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 11:13:12.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 11:13:25.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-6qch6" for this suite. Jan 23 11:13:31.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:13:31.300: INFO: namespace: e2e-tests-kubelet-test-6qch6, resource: bindings, ignored listing per whitelist Jan 23 11:13:31.377: INFO: namespace e2e-tests-kubelet-test-6qch6 deletion completed in 6.182384025s • [SLOW TEST:18.575 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 11:13:31.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide 
container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 23 11:13:31.739: INFO: Waiting up to 5m0s for pod "downwardapi-volume-639fb248-3dd1-11ea-bb65-0242ac110005" in namespace "e2e-tests-downward-api-g7ch7" to be "success or failure" Jan 23 11:13:31.761: INFO: Pod "downwardapi-volume-639fb248-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.176635ms Jan 23 11:13:34.140: INFO: Pod "downwardapi-volume-639fb248-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.400060217s Jan 23 11:13:36.163: INFO: Pod "downwardapi-volume-639fb248-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.423549144s Jan 23 11:13:38.179: INFO: Pod "downwardapi-volume-639fb248-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.439652242s Jan 23 11:13:40.206: INFO: Pod "downwardapi-volume-639fb248-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.466380073s Jan 23 11:13:42.244: INFO: Pod "downwardapi-volume-639fb248-3dd1-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.504564408s STEP: Saw pod success Jan 23 11:13:42.244: INFO: Pod "downwardapi-volume-639fb248-3dd1-11ea-bb65-0242ac110005" satisfied condition "success or failure" Jan 23 11:13:42.253: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-639fb248-3dd1-11ea-bb65-0242ac110005 container client-container: STEP: delete the pod Jan 23 11:13:42.306: INFO: Waiting for pod downwardapi-volume-639fb248-3dd1-11ea-bb65-0242ac110005 to disappear Jan 23 11:13:42.314: INFO: Pod downwardapi-volume-639fb248-3dd1-11ea-bb65-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 11:13:42.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-g7ch7" for this suite. Jan 23 11:13:48.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:13:48.528: INFO: namespace: e2e-tests-downward-api-g7ch7, resource: bindings, ignored listing per whitelist Jan 23 11:13:48.605: INFO: namespace e2e-tests-downward-api-g7ch7 deletion completed in 6.285023317s • [SLOW TEST:17.228 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes 
client Jan 23 11:13:48.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 23 11:13:48.826: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6dd175b4-3dd1-11ea-bb65-0242ac110005" in namespace "e2e-tests-projected-x2zdg" to be "success or failure" Jan 23 11:13:48.946: INFO: Pod "downwardapi-volume-6dd175b4-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 119.721135ms Jan 23 11:13:50.958: INFO: Pod "downwardapi-volume-6dd175b4-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131505741s Jan 23 11:13:52.974: INFO: Pod "downwardapi-volume-6dd175b4-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.147975894s Jan 23 11:13:55.019: INFO: Pod "downwardapi-volume-6dd175b4-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.193154765s Jan 23 11:13:57.030: INFO: Pod "downwardapi-volume-6dd175b4-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.203860771s Jan 23 11:13:59.043: INFO: Pod "downwardapi-volume-6dd175b4-3dd1-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.217009249s STEP: Saw pod success Jan 23 11:13:59.043: INFO: Pod "downwardapi-volume-6dd175b4-3dd1-11ea-bb65-0242ac110005" satisfied condition "success or failure" Jan 23 11:13:59.050: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-6dd175b4-3dd1-11ea-bb65-0242ac110005 container client-container: STEP: delete the pod Jan 23 11:13:59.194: INFO: Waiting for pod downwardapi-volume-6dd175b4-3dd1-11ea-bb65-0242ac110005 to disappear Jan 23 11:13:59.214: INFO: Pod downwardapi-volume-6dd175b4-3dd1-11ea-bb65-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 11:13:59.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-x2zdg" for this suite. Jan 23 11:14:07.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:14:07.324: INFO: namespace: e2e-tests-projected-x2zdg, resource: bindings, ignored listing per whitelist Jan 23 11:14:07.438: INFO: namespace e2e-tests-projected-x2zdg deletion completed in 8.219224467s • [SLOW TEST:18.833 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 
23 11:14:07.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-790f3957-3dd1-11ea-bb65-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 23 11:14:07.704: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7910c794-3dd1-11ea-bb65-0242ac110005" in namespace "e2e-tests-projected-wrtk7" to be "success or failure" Jan 23 11:14:07.721: INFO: Pod "pod-projected-configmaps-7910c794-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.984603ms Jan 23 11:14:09.738: INFO: Pod "pod-projected-configmaps-7910c794-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033559444s Jan 23 11:14:11.754: INFO: Pod "pod-projected-configmaps-7910c794-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049784231s Jan 23 11:14:13.849: INFO: Pod "pod-projected-configmaps-7910c794-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.144253482s Jan 23 11:14:15.872: INFO: Pod "pod-projected-configmaps-7910c794-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.168046544s Jan 23 11:14:17.892: INFO: Pod "pod-projected-configmaps-7910c794-3dd1-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.187683859s STEP: Saw pod success Jan 23 11:14:17.892: INFO: Pod "pod-projected-configmaps-7910c794-3dd1-11ea-bb65-0242ac110005" satisfied condition "success or failure" Jan 23 11:14:17.901: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-7910c794-3dd1-11ea-bb65-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Jan 23 11:14:18.017: INFO: Waiting for pod pod-projected-configmaps-7910c794-3dd1-11ea-bb65-0242ac110005 to disappear Jan 23 11:14:18.029: INFO: Pod pod-projected-configmaps-7910c794-3dd1-11ea-bb65-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 11:14:18.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wrtk7" for this suite. Jan 23 11:14:24.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:14:24.224: INFO: namespace: e2e-tests-projected-wrtk7, resource: bindings, ignored listing per whitelist Jan 23 11:14:24.235: INFO: namespace e2e-tests-projected-wrtk7 deletion completed in 6.194601626s • [SLOW TEST:16.796 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 11:14:24.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 23 11:14:24.586: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jan 23 11:14:24.668: INFO: Number of nodes with available pods: 0 Jan 23 11:14:24.668: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Jan 23 11:14:24.760: INFO: Number of nodes with available pods: 0 Jan 23 11:14:24.760: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:14:25.772: INFO: Number of nodes with available pods: 0 Jan 23 11:14:25.772: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:14:26.800: INFO: Number of nodes with available pods: 0 Jan 23 11:14:26.800: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:14:27.774: INFO: Number of nodes with available pods: 0 Jan 23 11:14:27.774: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:14:28.779: INFO: Number of nodes with available pods: 0 Jan 23 11:14:28.779: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:14:29.775: INFO: Number of nodes with available pods: 0 Jan 23 11:14:29.775: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:14:30.955: INFO: Number of nodes with 
available pods: 0
Jan 23 11:14:30.955: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 11:14:31.772: INFO: Number of nodes with available pods: 0
Jan 23 11:14:31.772: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 11:14:32.771: INFO: Number of nodes with available pods: 0
Jan 23 11:14:32.771: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 11:14:33.787: INFO: Number of nodes with available pods: 1
Jan 23 11:14:33.788: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 23 11:14:33.862: INFO: Number of nodes with available pods: 1
Jan 23 11:14:33.862: INFO: Number of running nodes: 0, number of available pods: 1
Jan 23 11:14:34.895: INFO: Number of nodes with available pods: 0
Jan 23 11:14:34.895: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 23 11:14:34.939: INFO: Number of nodes with available pods: 0
Jan 23 11:14:34.939: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 11:14:35.993: INFO: Number of nodes with available pods: 0
Jan 23 11:14:35.993: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 11:14:37.149: INFO: Number of nodes with available pods: 0
Jan 23 11:14:37.150: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 11:14:37.960: INFO: Number of nodes with available pods: 0
Jan 23 11:14:37.960: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 11:14:38.958: INFO: Number of nodes with available pods: 0
Jan 23 11:14:38.958: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 11:14:39.956: INFO: Number of nodes with available pods: 0
Jan 23 11:14:39.956: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 11:14:40.955: INFO: Number of nodes with available pods: 0
Jan 23 11:14:40.955: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 11:14:41.958: INFO: Number of nodes with available pods: 0
Jan 23 11:14:41.958: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 11:14:42.952: INFO: Number of nodes with available pods: 0
Jan 23 11:14:42.952: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 11:14:44.192: INFO: Number of nodes with available pods: 0
Jan 23 11:14:44.192: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 11:14:44.959: INFO: Number of nodes with available pods: 0
Jan 23 11:14:44.959: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 11:14:45.967: INFO: Number of nodes with available pods: 0
Jan 23 11:14:45.967: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 11:14:46.986: INFO: Number of nodes with available pods: 0
Jan 23 11:14:46.986: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 11:14:48.562: INFO: Number of nodes with available pods: 0
Jan 23 11:14:48.563: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 11:14:48.961: INFO: Number of nodes with available pods: 0
Jan 23 11:14:48.961: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 11:14:50.014: INFO: Number of nodes with available pods: 0
Jan 23 11:14:50.014: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 11:14:50.956: INFO: Number of nodes with available pods: 0
Jan 23 11:14:50.956: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 11:14:51.971: INFO: Number of nodes with available pods: 1
Jan 23 11:14:51.971: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-m2l7t, will wait for the garbage collector to delete the pods
Jan 23 11:14:52.072: INFO: Deleting DaemonSet.extensions daemon-set took: 23.440072ms
Jan 23 11:14:52.372: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.559739ms
Jan 23 11:14:59.228: INFO: Number of nodes with available pods: 0
Jan 23 11:14:59.229: INFO: Number of running nodes: 0, number of available pods: 0
Jan 23 11:14:59.257: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-m2l7t/daemonsets","resourceVersion":"19177840"},"items":null}
Jan 23 11:14:59.264: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-m2l7t/pods","resourceVersion":"19177840"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:14:59.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-m2l7t" for this suite.
Jan 23 11:15:05.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:15:05.540: INFO: namespace: e2e-tests-daemonsets-m2l7t, resource: bindings, ignored listing per whitelist
Jan 23 11:15:05.682: INFO: namespace e2e-tests-daemonsets-m2l7t deletion completed in 6.362833909s
• [SLOW TEST:41.447 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:15:05.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 23 11:15:17.128: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:15:18.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-9xb97" for this suite.
Jan 23 11:15:45.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:15:45.168: INFO: namespace: e2e-tests-replicaset-9xb97, resource: bindings, ignored listing per whitelist
Jan 23 11:15:45.339: INFO: namespace e2e-tests-replicaset-9xb97 deletion completed in 27.157260392s
• [SLOW TEST:39.658 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:15:45.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 23 11:15:45.531: INFO: Waiting up to 5m0s for pod "pod-b3610008-3dd1-11ea-bb65-0242ac110005" in namespace "e2e-tests-emptydir-2x7wr" to be "success or failure"
Jan 23 11:15:45.542: INFO: Pod "pod-b3610008-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.982362ms
Jan 23 11:15:47.560: INFO: Pod "pod-b3610008-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02943557s
Jan 23 11:15:49.581: INFO: Pod "pod-b3610008-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050330671s
Jan 23 11:15:51.596: INFO: Pod "pod-b3610008-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064926544s
Jan 23 11:15:53.637: INFO: Pod "pod-b3610008-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.106189535s
Jan 23 11:15:56.431: INFO: Pod "pod-b3610008-3dd1-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.900215753s
STEP: Saw pod success
Jan 23 11:15:56.431: INFO: Pod "pod-b3610008-3dd1-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 11:15:56.462: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b3610008-3dd1-11ea-bb65-0242ac110005 container test-container:
STEP: delete the pod
Jan 23 11:15:57.033: INFO: Waiting for pod pod-b3610008-3dd1-11ea-bb65-0242ac110005 to disappear
Jan 23 11:15:57.046: INFO: Pod pod-b3610008-3dd1-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:15:57.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2x7wr" for this suite.
Jan 23 11:16:03.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:16:03.287: INFO: namespace: e2e-tests-emptydir-2x7wr, resource: bindings, ignored listing per whitelist
Jan 23 11:16:03.349: INFO: namespace e2e-tests-emptydir-2x7wr deletion completed in 6.291008996s
• [SLOW TEST:18.009 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:16:03.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-wcjgk
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-wcjgk to expose endpoints map[]
Jan 23 11:16:03.599: INFO: Get endpoints failed (17.006906ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 23 11:16:04.615: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-wcjgk exposes endpoints map[] (1.032960948s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-wcjgk
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-wcjgk to expose endpoints map[pod1:[80]]
Jan 23 11:16:09.732: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (5.102827069s elapsed, will retry)
Jan 23 11:16:13.026: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-wcjgk exposes endpoints map[pod1:[80]] (8.397054833s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-wcjgk
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-wcjgk to expose endpoints map[pod1:[80] pod2:[80]]
Jan 23 11:16:17.517: INFO: Unexpected endpoints: found map[bec4a262-3dd1-11ea-a994-fa163e34d433:[80]], expected map[pod2:[80] pod1:[80]] (4.397265698s elapsed, will retry)
Jan 23 11:16:23.531: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-wcjgk exposes endpoints map[pod1:[80] pod2:[80]] (10.410382537s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-wcjgk
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-wcjgk to expose endpoints map[pod2:[80]]
Jan 23 11:16:24.595: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-wcjgk exposes endpoints map[pod2:[80]] (1.037377399s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-wcjgk
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-wcjgk to expose endpoints map[]
Jan 23 11:16:25.789: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-wcjgk exposes endpoints map[] (1.186139398s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:16:26.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-wcjgk" for this suite.
Jan 23 11:16:50.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:16:50.645: INFO: namespace: e2e-tests-services-wcjgk, resource: bindings, ignored listing per whitelist
Jan 23 11:16:50.697: INFO: namespace e2e-tests-services-wcjgk deletion completed in 24.325620712s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:47.348 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:16:50.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-da4ee685-3dd1-11ea-bb65-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 23 11:16:50.933: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-da5bf103-3dd1-11ea-bb65-0242ac110005" in namespace "e2e-tests-projected-q4zmg" to be "success or failure"
Jan 23 11:16:50.941: INFO: Pod "pod-projected-secrets-da5bf103-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.50997ms
Jan 23 11:16:52.963: INFO: Pod "pod-projected-secrets-da5bf103-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029103831s
Jan 23 11:16:54.989: INFO: Pod "pod-projected-secrets-da5bf103-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055443904s
Jan 23 11:16:57.003: INFO: Pod "pod-projected-secrets-da5bf103-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069822165s
Jan 23 11:16:59.018: INFO: Pod "pod-projected-secrets-da5bf103-3dd1-11ea-bb65-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.084028096s
Jan 23 11:17:01.046: INFO: Pod "pod-projected-secrets-da5bf103-3dd1-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.112311145s
STEP: Saw pod success
Jan 23 11:17:01.046: INFO: Pod "pod-projected-secrets-da5bf103-3dd1-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 11:17:01.060: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-da5bf103-3dd1-11ea-bb65-0242ac110005 container secret-volume-test:
STEP: delete the pod
Jan 23 11:17:01.184: INFO: Waiting for pod pod-projected-secrets-da5bf103-3dd1-11ea-bb65-0242ac110005 to disappear
Jan 23 11:17:01.194: INFO: Pod pod-projected-secrets-da5bf103-3dd1-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:17:01.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-q4zmg" for this suite.
Jan 23 11:17:09.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:17:09.375: INFO: namespace: e2e-tests-projected-q4zmg, resource: bindings, ignored listing per whitelist
Jan 23 11:17:09.483: INFO: namespace e2e-tests-projected-q4zmg deletion completed in 8.234927594s
• [SLOW TEST:18.786 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:17:09.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 23 11:17:09.675: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e58766ab-3dd1-11ea-bb65-0242ac110005" in namespace "e2e-tests-projected-k8qcm" to be "success or failure"
Jan 23 11:17:09.684: INFO: Pod "downwardapi-volume-e58766ab-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.718393ms
Jan 23 11:17:11.973: INFO: Pod "downwardapi-volume-e58766ab-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.29735543s
Jan 23 11:17:13.989: INFO: Pod "downwardapi-volume-e58766ab-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.313721163s
Jan 23 11:17:16.004: INFO: Pod "downwardapi-volume-e58766ab-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.328856958s
Jan 23 11:17:18.158: INFO: Pod "downwardapi-volume-e58766ab-3dd1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.483001935s
Jan 23 11:17:20.345: INFO: Pod "downwardapi-volume-e58766ab-3dd1-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.669676419s
STEP: Saw pod success
Jan 23 11:17:20.345: INFO: Pod "downwardapi-volume-e58766ab-3dd1-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 11:17:20.355: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e58766ab-3dd1-11ea-bb65-0242ac110005 container client-container:
STEP: delete the pod
Jan 23 11:17:20.664: INFO: Waiting for pod downwardapi-volume-e58766ab-3dd1-11ea-bb65-0242ac110005 to disappear
Jan 23 11:17:20.692: INFO: Pod downwardapi-volume-e58766ab-3dd1-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:17:20.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-k8qcm" for this suite.
Jan 23 11:17:26.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:17:26.972: INFO: namespace: e2e-tests-projected-k8qcm, resource: bindings, ignored listing per whitelist
Jan 23 11:17:27.156: INFO: namespace e2e-tests-projected-k8qcm deletion completed in 6.450190854s
• [SLOW TEST:17.672 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:17:27.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-4hj6l
I0123 11:17:27.608872 8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-4hj6l, replica count: 1
I0123 11:17:28.660056 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0123 11:17:29.660310 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0123 11:17:30.660695 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0123 11:17:31.661226 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0123 11:17:32.661702 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0123 11:17:33.662036 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0123 11:17:34.662492 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0123 11:17:35.662830 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 23 11:17:35.826: INFO: Created: latency-svc-fr85s
Jan 23 11:17:35.989: INFO: Got endpoints: latency-svc-fr85s [226.354183ms]
Jan 23 11:17:36.176: INFO: Created: latency-svc-4pps4
Jan 23 11:17:36.232: INFO: Created: latency-svc-m85p2
Jan 23 11:17:36.247: INFO: Got endpoints: latency-svc-4pps4 [257.884464ms]
Jan 23 11:17:36.340: INFO: Got endpoints: latency-svc-m85p2 [350.193296ms]
Jan 23 11:17:36.363: INFO: Created: latency-svc-8ftbg
Jan 23 11:17:36.385: INFO: Got endpoints: latency-svc-8ftbg [394.787557ms]
Jan 23 11:17:36.424: INFO: Created: latency-svc-jk8qd
Jan 23 11:17:36.552: INFO: Got endpoints: latency-svc-jk8qd [563.100534ms]
Jan 23 11:17:36.604: INFO: Created: latency-svc-6nq8r
Jan 23 11:17:36.607: INFO: Got endpoints: latency-svc-6nq8r [616.946274ms]
Jan 23 11:17:36.641: INFO: Created: latency-svc-6njql
Jan 23 11:17:36.755: INFO: Got endpoints: latency-svc-6njql [765.847554ms]
Jan 23 11:17:36.785: INFO: Created: latency-svc-q4vm8
Jan 23 11:17:36.800: INFO: Got endpoints: latency-svc-q4vm8 [809.407066ms]
Jan 23 11:17:36.859: INFO: Created: latency-svc-z8wkn
Jan 23 11:17:37.092: INFO: Created: latency-svc-btmnj
Jan 23 11:17:37.095: INFO: Got endpoints: latency-svc-z8wkn [1.104574737s]
Jan 23 11:17:37.112: INFO: Got endpoints: latency-svc-btmnj [1.122020993s]
Jan 23 11:17:37.318: INFO: Created: latency-svc-tjzd8
Jan 23 11:17:37.399: INFO: Got endpoints: latency-svc-tjzd8 [1.408715321s]
Jan 23 11:17:37.411: INFO: Created: latency-svc-6dwv7
Jan 23 11:17:37.626: INFO: Got endpoints: latency-svc-6dwv7 [1.635784216s]
Jan 23 11:17:37.687: INFO: Created: latency-svc-kvsn2
Jan 23 11:17:37.701: INFO: Got endpoints: latency-svc-kvsn2 [1.71018293s]
Jan 23 11:17:37.907: INFO: Created: latency-svc-mlsnh
Jan 23 11:17:37.955: INFO: Got endpoints: latency-svc-mlsnh [1.965529483s]
Jan 23 11:17:38.091: INFO: Created: latency-svc-mx4qh
Jan 23 11:17:38.110: INFO: Got endpoints: latency-svc-mx4qh [2.119808767s]
Jan 23 11:17:38.157: INFO: Created: latency-svc-25gx4
Jan 23 11:17:38.294: INFO: Got endpoints: latency-svc-25gx4 [2.30422201s]
Jan 23 11:17:38.322: INFO: Created: latency-svc-7rwsc
Jan 23 11:17:38.357: INFO: Got endpoints: latency-svc-7rwsc [2.109647672s]
Jan 23 11:17:38.480: INFO: Created: latency-svc-cvxt6
Jan 23 11:17:38.490: INFO: Got endpoints: latency-svc-cvxt6 [2.149384701s]
Jan 23 11:17:38.617: INFO: Created: latency-svc-nlsxb
Jan 23 11:17:38.704: INFO: Got endpoints: latency-svc-nlsxb [2.319058041s]
Jan 23 11:17:38.755: INFO: Created: latency-svc-ph2mp
Jan 23 11:17:38.758: INFO: Got endpoints: latency-svc-ph2mp [2.205432755s]
Jan 23 11:17:38.927: INFO: Created: latency-svc-wctjp
Jan 23 11:17:38.953: INFO: Got endpoints: latency-svc-wctjp [2.345565269s]
Jan 23 11:17:39.244: INFO: Created: latency-svc-gnd6s
Jan 23 11:17:39.264: INFO: Got endpoints: latency-svc-gnd6s [2.508283175s]
Jan 23 11:17:39.316: INFO: Created: latency-svc-lvmvb
Jan 23 11:17:39.331: INFO: Got endpoints: latency-svc-lvmvb [2.53134407s]
Jan 23 11:17:39.456: INFO: Created: latency-svc-sfz8t
Jan 23 11:17:39.476: INFO: Got endpoints: latency-svc-sfz8t [2.380668843s]
Jan 23 11:17:39.651: INFO: Created: latency-svc-2lkhv
Jan 23 11:17:39.683: INFO: Got endpoints: latency-svc-2lkhv [2.570532525s]
Jan 23 11:17:39.833: INFO: Created: latency-svc-kc4t5
Jan 23 11:17:39.883: INFO: Got endpoints: latency-svc-kc4t5 [2.4843604s]
Jan 23 11:17:39.921: INFO: Created: latency-svc-l9hwv
Jan 23 11:17:40.063: INFO: Got endpoints: latency-svc-l9hwv [2.436053502s]
Jan 23 11:17:40.091: INFO: Created: latency-svc-cv5p9
Jan 23 11:17:40.104: INFO: Got endpoints: latency-svc-cv5p9 [2.402741068s]
Jan 23 11:17:40.269: INFO: Created: latency-svc-kljw4
Jan 23 11:17:40.288: INFO: Got endpoints: latency-svc-kljw4 [2.33298108s]
Jan 23 11:17:40.373: INFO: Created: latency-svc-qzw4h
Jan 23 11:17:40.487: INFO: Got endpoints: latency-svc-qzw4h [2.376515618s]
Jan 23 11:17:40.675: INFO: Created: latency-svc-876hl
Jan 23 11:17:40.717: INFO: Got endpoints: latency-svc-876hl [2.423180345s]
Jan 23 11:17:40.845: INFO: Created: latency-svc-9qkmx
Jan 23 11:17:40.896: INFO: Got endpoints: latency-svc-9qkmx [2.538396013s]
Jan 23 11:17:41.070: INFO: Created: latency-svc-7cwnt
Jan 23 11:17:41.089: INFO: Got endpoints: latency-svc-7cwnt [2.599260402s]
Jan 23 11:17:41.246: INFO: Created: latency-svc-gmn5f
Jan 23 11:17:41.262: INFO: Got endpoints: latency-svc-gmn5f [2.557641959s]
Jan 23 11:17:41.332: INFO: Created: latency-svc-2x5f5
Jan 23 11:17:41.463: INFO: Got endpoints: latency-svc-2x5f5 [2.70457689s]
Jan 23 11:17:41.515: INFO: Created: latency-svc-bsj5k
Jan 23 11:17:41.553: INFO: Got endpoints: latency-svc-bsj5k [2.60014215s]
Jan 23 11:17:41.679: INFO: Created: latency-svc-gbs9r
Jan 23 11:17:41.701: INFO: Got endpoints: latency-svc-gbs9r [2.436832584s]
Jan 23 11:17:41.986: INFO: Created: latency-svc-8gjtg
Jan 23 11:17:41.986: INFO: Got endpoints: latency-svc-8gjtg [2.654741926s]
Jan 23 11:17:42.299: INFO: Created: latency-svc-7qnf8
Jan 23 11:17:42.337: INFO: Got endpoints: latency-svc-7qnf8 [2.860518889s]
Jan 23 11:17:42.458: INFO: Created: latency-svc-5snwj
Jan 23 11:17:42.470: INFO: Got endpoints: latency-svc-5snwj [2.786865612s]
Jan 23 11:17:42.553: INFO: Created: latency-svc-kc6bx
Jan 23 11:17:42.698: INFO: Got endpoints: latency-svc-kc6bx [2.815035086s]
Jan 23 11:17:42.725: INFO: Created: latency-svc-brwmh
Jan 23 11:17:42.774: INFO: Got endpoints: latency-svc-brwmh [2.711138198s]
Jan 23 11:17:42.880: INFO: Created: latency-svc-tp95r
Jan 23 11:17:42.917: INFO: Got endpoints: latency-svc-tp95r [2.813527021s]
Jan 23 11:17:42.969: INFO: Created: latency-svc-cl9c6
Jan 23 11:17:43.081: INFO: Got endpoints: latency-svc-cl9c6 [2.792472291s]
Jan 23 11:17:43.093: INFO: Created: latency-svc-xf9xd
Jan 23 11:17:43.106: INFO: Got endpoints: latency-svc-xf9xd [2.61824658s]
Jan 23 11:17:43.163: INFO: Created: latency-svc-5xkj9
Jan 23 11:17:43.256: INFO: Got endpoints: latency-svc-5xkj9 [2.538768711s]
Jan 23 11:17:43.292: INFO: Created: latency-svc-rqphw
Jan 23 11:17:43.305: INFO: Got endpoints: latency-svc-rqphw [2.408687322s]
Jan 23 11:17:43.473: INFO: Created: latency-svc-hvvqk
Jan 23 11:17:43.483: INFO: Got endpoints: latency-svc-hvvqk [2.393693732s]
Jan 23 11:17:43.533: INFO: Created: latency-svc-bjgwb
Jan 23 11:17:43.547: INFO: Got endpoints: latency-svc-bjgwb [2.284850658s]
Jan 23 11:17:43.673: INFO: Created: latency-svc-wp4wm
Jan 23 11:17:43.685: INFO: Got endpoints: latency-svc-wp4wm [2.2217917s]
Jan 23 11:17:43.974: INFO: Created: latency-svc-4qw77
Jan 23 11:17:43.996: INFO: Got endpoints: latency-svc-4qw77 [2.442410424s]
Jan 23 11:17:44.087: INFO: Created: latency-svc-997ht
Jan 23 11:17:44.261: INFO: Got endpoints: latency-svc-997ht [2.560182007s]
Jan 23 11:17:44.302: INFO: Created: latency-svc-q2tr7
Jan 23 11:17:44.337: INFO: Created: latency-svc-t4rn6
Jan 23 11:17:44.342: INFO: Got endpoints: latency-svc-q2tr7 [2.3557244s]
Jan 23 11:17:44.356: INFO: Got endpoints: latency-svc-t4rn6 [2.018737832s]
Jan 23 11:17:44.508: INFO: Created: latency-svc-tmb5x
Jan 23 11:17:44.543: INFO: Got endpoints: latency-svc-tmb5x [2.07261119s]
Jan 23 11:17:44.706: INFO: Created: latency-svc-8gf7w
Jan 23 11:17:44.756: INFO: Got endpoints: latency-svc-8gf7w [2.057498988s]
Jan 23 11:17:44.941: INFO: Created: latency-svc-qrfm4
Jan 23 11:17:44.988: INFO: Got endpoints: latency-svc-qrfm4 [2.213327589s]
Jan 23 11:17:45.181: INFO: Created: latency-svc-c97qg
Jan 23 11:17:45.206: INFO: Got endpoints: latency-svc-c97qg [2.288152173s]
Jan 23 11:17:45.399: INFO: Created: latency-svc-7c7lw
Jan 23 11:17:45.405: INFO: Got endpoints: latency-svc-7c7lw [2.324309016s]
Jan 23 11:17:45.454: INFO: Created: latency-svc-qls4z
Jan 23 11:17:45.565: INFO: Got endpoints: latency-svc-qls4z [2.459767874s]
Jan 23 11:17:45.584: INFO: Created: latency-svc-pz88l
Jan 23 11:17:45.599: INFO: Got endpoints: latency-svc-pz88l [2.342048083s]
Jan 23 11:17:45.666: INFO: Created: latency-svc-xpghv
Jan 23 11:17:45.800: INFO: Got endpoints: latency-svc-xpghv [2.495273128s]
Jan 23 11:17:45.866: INFO: Created: latency-svc-pb5br
Jan 23 11:17:45.870: INFO: Got endpoints: latency-svc-pb5br [2.386946277s]
Jan 23 11:17:46.025: INFO: Created: latency-svc-jmjr4
Jan 23 11:17:46.123: INFO: Created: latency-svc-69q8d
Jan 23 11:17:46.230: INFO: Got endpoints: latency-svc-jmjr4 [2.682464299s]
Jan 23 11:17:46.230: INFO: Got endpoints: latency-svc-69q8d [2.544475885s]
Jan 23 11:17:46.291: INFO: Created: latency-svc-sv25g
Jan 23 11:17:46.323: INFO: Got endpoints: latency-svc-sv25g [2.326999374s]
Jan 23 11:17:46.511: INFO: Created: latency-svc-xdtdj
Jan 23 11:17:46.574: INFO: Got endpoints: latency-svc-xdtdj [2.312615977s]
Jan 23 11:17:46.773: INFO: Created: latency-svc-gg726
Jan 23 11:17:46.805: INFO: Got endpoints: latency-svc-gg726 [2.463379412s]
Jan 23 11:17:47.039: INFO: Created: latency-svc-2t7dz
Jan 23 11:17:47.061: INFO: Got endpoints: latency-svc-2t7dz [2.705296123s]
Jan 23 11:17:47.273: INFO: Created: latency-svc-2k9rq
Jan 23 11:17:47.303: INFO: Got endpoints: latency-svc-2k9rq [2.760689988s]
Jan 23 11:17:47.665: INFO: Created: latency-svc-f595d
Jan 23 11:17:47.881: INFO: Got endpoints: latency-svc-f595d [3.124745551s]
Jan 23 11:17:47.961: INFO: Created: latency-svc-8ncnq
Jan 23 11:17:48.258: INFO: Got endpoints: latency-svc-8ncnq [3.269522251s]
Jan 23 11:17:48.522: INFO: Created: latency-svc-xzvww
Jan 23 11:17:48.605: INFO: Got endpoints: latency-svc-xzvww [3.398645578s]
Jan 23 11:17:48.705: INFO: Created: latency-svc-7jqn7
Jan 23 11:17:48.717: INFO: Got endpoints: latency-svc-7jqn7 [3.311327351s]
Jan 23 11:17:48.788: INFO: Created: latency-svc-dvd76
Jan 23 11:17:48.899: INFO: Got endpoints: latency-svc-dvd76 [3.333824427s]
Jan 23 11:17:48.910: INFO: Created: latency-svc-xnkm8
Jan 23 11:17:48.925: INFO: Got endpoints: latency-svc-xnkm8 [3.326102762s]
Jan 23 11:17:49.002: INFO: Created: latency-svc-vs8fg
Jan 23 11:17:49.144: INFO: Got endpoints: latency-svc-vs8fg [3.343924959s]
Jan 23 11:17:49.188: INFO: Created: latency-svc-jnt7d
Jan 23 11:17:49.239: INFO: Got endpoints: latency-svc-jnt7d [3.368819684s]
Jan 23 11:17:49.403: INFO: Created: latency-svc-tknvq
Jan 23 11:17:49.411: INFO: Got endpoints: latency-svc-tknvq [3.180924198s]
Jan 23 11:17:49.494: INFO: Created: latency-svc-v9w9n
Jan 23 11:17:49.609: INFO: Got endpoints: latency-svc-v9w9n [3.378905642s]
Jan 23 11:17:49.704: INFO: Created: latency-svc-f78l5
Jan 23 11:17:49.712: INFO: Got endpoints: latency-svc-f78l5 [3.388995041s]
Jan 23 11:17:49.923: INFO: Created: latency-svc-mszns
Jan 23 11:17:49.955: INFO: Got endpoints: latency-svc-mszns [3.380335841s]
Jan 23 11:17:50.187: INFO: Created: latency-svc-bbv6t
Jan 23 11:17:50.195: INFO: Got endpoints: latency-svc-bbv6t [3.389034048s]
Jan 23 11:17:50.413: INFO: Created: latency-svc-wxd5x
Jan 23 11:17:50.435: INFO: Got endpoints: latency-svc-wxd5x [3.373435849s]
Jan 23 11:17:50.642: INFO: Created: latency-svc-trq79
Jan 23 11:17:50.869: INFO: Got endpoints: latency-svc-trq79 [3.565170445s]
Jan 23 11:17:50.904: INFO: Created: latency-svc-hjfgq
Jan 23 11:17:50.948: INFO: Got endpoints: latency-svc-hjfgq [3.066901191s]
Jan 23 11:17:51.093: INFO: Created: latency-svc-x86dp
Jan 23 11:17:51.154: INFO: Got endpoints: latency-svc-x86dp [2.89578794s]
Jan 23 11:17:51.163: INFO: Created: latency-svc-svtvt
Jan 23 11:17:51.271: INFO: Got endpoints: latency-svc-svtvt [2.666080318s]
Jan 23 11:17:51.309: INFO: Created: latency-svc-wbd9k
Jan 23 11:17:51.335: INFO: Got endpoints: latency-svc-wbd9k [2.618332655s]
Jan 23 11:17:51.479: INFO: Created: latency-svc-g7z8v
Jan 23 11:17:51.502: INFO: Got endpoints: latency-svc-g7z8v [2.602265994s]
Jan 23 11:17:51.575: INFO: Created: latency-svc-tsfc9
Jan 23 11:17:51.672: INFO: Got endpoints: latency-svc-tsfc9 [2.746461664s]
Jan 23 11:17:51.696: INFO: Created: latency-svc-tlvlv
Jan 23 11:17:51.717: INFO: Got endpoints: latency-svc-tlvlv [2.572645384s]
Jan 23 11:17:51.896: INFO: Created: latency-svc-swdmd
Jan 23 11:17:51.900: INFO: Got endpoints: latency-svc-swdmd [2.660403441s]
Jan 23 11:17:52.097: INFO: Created: latency-svc-x65pg
Jan 23 11:17:52.114: INFO: Got endpoints: latency-svc-x65pg [2.702976844s]
Jan 23 11:17:52.306: INFO: Created: latency-svc-f64hg
Jan 23 11:17:52.367: INFO: Got endpoints: latency-svc-f64hg [2.758335095s]
Jan 23 11:17:52.484: INFO: Created: latency-svc-8wzs9
Jan 23 11:17:52.508: INFO: Got endpoints: latency-svc-8wzs9 [2.79533067s]
Jan 23 11:17:52.664: INFO: Created: latency-svc-rhthh
Jan 23 11:17:52.673: INFO: Got endpoints: latency-svc-rhthh [2.7176649s]
Jan 23 11:17:52.756: INFO: Created: latency-svc-drfvx
Jan 23 11:17:52.870: INFO: Got endpoints: latency-svc-drfvx [2.674707277s]
Jan 23 11:17:52.901: INFO: Created: latency-svc-zlflr
Jan 23 11:17:52.917: INFO: Got endpoints: latency-svc-zlflr [2.482699227s]
Jan 23 11:17:52.970: INFO: Created: latency-svc-9w8hz
Jan 23 11:17:53.152: INFO: Got endpoints: latency-svc-9w8hz [2.283119771s]
Jan 23 11:17:53.182:
INFO: Created: latency-svc-wxrwg Jan 23 11:17:53.208: INFO: Got endpoints: latency-svc-wxrwg [2.258987093s] Jan 23 11:17:53.360: INFO: Created: latency-svc-7w6t2 Jan 23 11:17:53.382: INFO: Got endpoints: latency-svc-7w6t2 [2.22802731s] Jan 23 11:17:53.433: INFO: Created: latency-svc-ptmrt Jan 23 11:17:53.449: INFO: Got endpoints: latency-svc-ptmrt [2.177885513s] Jan 23 11:17:53.580: INFO: Created: latency-svc-dqqd6 Jan 23 11:17:53.606: INFO: Got endpoints: latency-svc-dqqd6 [2.270159454s] Jan 23 11:17:53.765: INFO: Created: latency-svc-gm829 Jan 23 11:17:53.784: INFO: Got endpoints: latency-svc-gm829 [2.281853584s] Jan 23 11:17:53.868: INFO: Created: latency-svc-mmf2f Jan 23 11:17:53.885: INFO: Got endpoints: latency-svc-mmf2f [2.212757906s] Jan 23 11:17:54.066: INFO: Created: latency-svc-2drp8 Jan 23 11:17:54.126: INFO: Created: latency-svc-w9c54 Jan 23 11:17:54.126: INFO: Got endpoints: latency-svc-2drp8 [2.408880468s] Jan 23 11:17:54.282: INFO: Got endpoints: latency-svc-w9c54 [2.382102859s] Jan 23 11:17:54.301: INFO: Created: latency-svc-qstnp Jan 23 11:17:54.329: INFO: Got endpoints: latency-svc-qstnp [2.215090946s] Jan 23 11:17:54.510: INFO: Created: latency-svc-gp4n4 Jan 23 11:17:54.537: INFO: Got endpoints: latency-svc-gp4n4 [2.169749545s] Jan 23 11:17:54.700: INFO: Created: latency-svc-427m9 Jan 23 11:17:54.709: INFO: Got endpoints: latency-svc-427m9 [2.200271225s] Jan 23 11:17:54.767: INFO: Created: latency-svc-bmznd Jan 23 11:17:54.784: INFO: Got endpoints: latency-svc-bmznd [2.111094115s] Jan 23 11:17:54.887: INFO: Created: latency-svc-6xwpl Jan 23 11:17:54.958: INFO: Got endpoints: latency-svc-6xwpl [2.087792578s] Jan 23 11:17:54.987: INFO: Created: latency-svc-xbsdc Jan 23 11:17:55.121: INFO: Got endpoints: latency-svc-xbsdc [2.203456506s] Jan 23 11:17:55.160: INFO: Created: latency-svc-jf72n Jan 23 11:17:55.190: INFO: Got endpoints: latency-svc-jf72n [2.038077615s] Jan 23 11:17:55.315: INFO: Created: latency-svc-v7fs4 Jan 23 11:17:55.343: INFO: Got 
endpoints: latency-svc-v7fs4 [2.134762548s] Jan 23 11:17:55.397: INFO: Created: latency-svc-g5frb Jan 23 11:17:55.498: INFO: Got endpoints: latency-svc-g5frb [2.11623193s] Jan 23 11:17:55.515: INFO: Created: latency-svc-j4crc Jan 23 11:17:55.538: INFO: Got endpoints: latency-svc-j4crc [2.0892622s] Jan 23 11:17:55.593: INFO: Created: latency-svc-n47wn Jan 23 11:17:55.705: INFO: Got endpoints: latency-svc-n47wn [2.098781745s] Jan 23 11:17:55.723: INFO: Created: latency-svc-7xt6c Jan 23 11:17:55.734: INFO: Got endpoints: latency-svc-7xt6c [1.949573686s] Jan 23 11:17:55.800: INFO: Created: latency-svc-rssht Jan 23 11:17:55.911: INFO: Got endpoints: latency-svc-rssht [2.026364702s] Jan 23 11:17:55.938: INFO: Created: latency-svc-n4fgd Jan 23 11:17:55.950: INFO: Got endpoints: latency-svc-n4fgd [1.823391298s] Jan 23 11:17:55.989: INFO: Created: latency-svc-7xjmw Jan 23 11:17:56.196: INFO: Got endpoints: latency-svc-7xjmw [1.913349706s] Jan 23 11:17:56.325: INFO: Created: latency-svc-7hgb5 Jan 23 11:17:56.355: INFO: Got endpoints: latency-svc-7hgb5 [2.025965401s] Jan 23 11:17:56.423: INFO: Created: latency-svc-nzb6z Jan 23 11:17:56.562: INFO: Got endpoints: latency-svc-nzb6z [2.023615539s] Jan 23 11:17:56.648: INFO: Created: latency-svc-9ptgs Jan 23 11:17:56.720: INFO: Got endpoints: latency-svc-9ptgs [2.011369085s] Jan 23 11:17:56.761: INFO: Created: latency-svc-z9q25 Jan 23 11:17:56.768: INFO: Got endpoints: latency-svc-z9q25 [1.983582474s] Jan 23 11:17:56.914: INFO: Created: latency-svc-s2w7k Jan 23 11:17:56.942: INFO: Got endpoints: latency-svc-s2w7k [1.983742124s] Jan 23 11:17:56.988: INFO: Created: latency-svc-gnvhf Jan 23 11:17:57.086: INFO: Got endpoints: latency-svc-gnvhf [1.964285611s] Jan 23 11:17:57.123: INFO: Created: latency-svc-wxxfx Jan 23 11:17:57.197: INFO: Created: latency-svc-jkjlb Jan 23 11:17:57.365: INFO: Got endpoints: latency-svc-wxxfx [2.174653971s] Jan 23 11:17:57.446: INFO: Created: latency-svc-85pwz Jan 23 11:17:57.447: INFO: Got endpoints: 
latency-svc-jkjlb [2.104421237s] Jan 23 11:17:57.456: INFO: Got endpoints: latency-svc-85pwz [1.957729453s] Jan 23 11:17:57.586: INFO: Created: latency-svc-6kn77 Jan 23 11:17:57.605: INFO: Got endpoints: latency-svc-6kn77 [2.066369055s] Jan 23 11:17:57.730: INFO: Created: latency-svc-k58ft Jan 23 11:17:57.746: INFO: Got endpoints: latency-svc-k58ft [2.040922033s] Jan 23 11:17:57.833: INFO: Created: latency-svc-tf5ph Jan 23 11:17:57.902: INFO: Got endpoints: latency-svc-tf5ph [2.168389761s] Jan 23 11:17:57.996: INFO: Created: latency-svc-pzgfr Jan 23 11:17:58.138: INFO: Got endpoints: latency-svc-pzgfr [2.226944389s] Jan 23 11:17:58.186: INFO: Created: latency-svc-44svw Jan 23 11:17:58.193: INFO: Got endpoints: latency-svc-44svw [2.242439678s] Jan 23 11:17:58.428: INFO: Created: latency-svc-h9bmw Jan 23 11:17:58.431: INFO: Got endpoints: latency-svc-h9bmw [2.235302388s] Jan 23 11:17:58.704: INFO: Created: latency-svc-xjv7w Jan 23 11:17:58.730: INFO: Got endpoints: latency-svc-xjv7w [2.374508494s] Jan 23 11:17:58.896: INFO: Created: latency-svc-tvt8r Jan 23 11:17:58.921: INFO: Got endpoints: latency-svc-tvt8r [2.359130846s] Jan 23 11:17:59.065: INFO: Created: latency-svc-qhz62 Jan 23 11:17:59.077: INFO: Got endpoints: latency-svc-qhz62 [2.356818988s] Jan 23 11:17:59.119: INFO: Created: latency-svc-sdrl2 Jan 23 11:17:59.144: INFO: Got endpoints: latency-svc-sdrl2 [2.37611482s] Jan 23 11:17:59.283: INFO: Created: latency-svc-jrp8z Jan 23 11:17:59.348: INFO: Got endpoints: latency-svc-jrp8z [2.405461645s] Jan 23 11:17:59.476: INFO: Created: latency-svc-w5thd Jan 23 11:17:59.488: INFO: Got endpoints: latency-svc-w5thd [2.401944804s] Jan 23 11:17:59.544: INFO: Created: latency-svc-db92g Jan 23 11:17:59.556: INFO: Got endpoints: latency-svc-db92g [2.190517905s] Jan 23 11:17:59.677: INFO: Created: latency-svc-fks67 Jan 23 11:17:59.720: INFO: Got endpoints: latency-svc-fks67 [2.272596665s] Jan 23 11:17:59.739: INFO: Created: latency-svc-zhtlv Jan 23 11:17:59.747: INFO: Got 
endpoints: latency-svc-zhtlv [2.290467837s] Jan 23 11:17:59.948: INFO: Created: latency-svc-2knnf Jan 23 11:17:59.966: INFO: Got endpoints: latency-svc-2knnf [2.361159672s] Jan 23 11:18:00.007: INFO: Created: latency-svc-fgpw5 Jan 23 11:18:00.015: INFO: Got endpoints: latency-svc-fgpw5 [2.268657017s] Jan 23 11:18:00.109: INFO: Created: latency-svc-ff96z Jan 23 11:18:00.115: INFO: Got endpoints: latency-svc-ff96z [2.212312944s] Jan 23 11:18:00.167: INFO: Created: latency-svc-jhz7h Jan 23 11:18:00.187: INFO: Got endpoints: latency-svc-jhz7h [2.04799713s] Jan 23 11:18:00.307: INFO: Created: latency-svc-zk5td Jan 23 11:18:00.337: INFO: Got endpoints: latency-svc-zk5td [2.143798833s] Jan 23 11:18:00.346: INFO: Created: latency-svc-rxpn2 Jan 23 11:18:00.357: INFO: Got endpoints: latency-svc-rxpn2 [1.92523955s] Jan 23 11:18:00.509: INFO: Created: latency-svc-sjfbb Jan 23 11:18:00.531: INFO: Got endpoints: latency-svc-sjfbb [1.800647704s] Jan 23 11:18:00.601: INFO: Created: latency-svc-2hpzp Jan 23 11:18:00.737: INFO: Got endpoints: latency-svc-2hpzp [1.815821401s] Jan 23 11:18:00.757: INFO: Created: latency-svc-rfnsf Jan 23 11:18:00.766: INFO: Got endpoints: latency-svc-rfnsf [1.688618029s] Jan 23 11:18:00.816: INFO: Created: latency-svc-czgsb Jan 23 11:18:00.916: INFO: Got endpoints: latency-svc-czgsb [1.771917272s] Jan 23 11:18:00.931: INFO: Created: latency-svc-gzfhd Jan 23 11:18:00.940: INFO: Got endpoints: latency-svc-gzfhd [1.592623946s] Jan 23 11:18:00.993: INFO: Created: latency-svc-654dc Jan 23 11:18:01.140: INFO: Got endpoints: latency-svc-654dc [1.651537699s] Jan 23 11:18:01.155: INFO: Created: latency-svc-9g7bz Jan 23 11:18:01.174: INFO: Got endpoints: latency-svc-9g7bz [1.617051316s] Jan 23 11:18:01.222: INFO: Created: latency-svc-vx45g Jan 23 11:18:01.252: INFO: Got endpoints: latency-svc-vx45g [1.531804397s] Jan 23 11:18:01.422: INFO: Created: latency-svc-4xwh2 Jan 23 11:18:01.579: INFO: Got endpoints: latency-svc-4xwh2 [1.831771288s] Jan 23 11:18:01.672: 
INFO: Created: latency-svc-26lhw Jan 23 11:18:01.672: INFO: Got endpoints: latency-svc-26lhw [1.705230052s] Jan 23 11:18:01.825: INFO: Created: latency-svc-5qnk2 Jan 23 11:18:01.839: INFO: Got endpoints: latency-svc-5qnk2 [1.824089624s] Jan 23 11:18:01.883: INFO: Created: latency-svc-mwprs Jan 23 11:18:02.045: INFO: Got endpoints: latency-svc-mwprs [1.930163526s] Jan 23 11:18:02.068: INFO: Created: latency-svc-5tqd5 Jan 23 11:18:02.096: INFO: Got endpoints: latency-svc-5tqd5 [1.908530485s] Jan 23 11:18:02.144: INFO: Created: latency-svc-6wgbt Jan 23 11:18:02.289: INFO: Got endpoints: latency-svc-6wgbt [1.952448326s] Jan 23 11:18:02.346: INFO: Created: latency-svc-jzqk9 Jan 23 11:18:02.366: INFO: Got endpoints: latency-svc-jzqk9 [2.009031554s] Jan 23 11:18:02.574: INFO: Created: latency-svc-l9tns Jan 23 11:18:02.605: INFO: Got endpoints: latency-svc-l9tns [2.074335674s] Jan 23 11:18:02.766: INFO: Created: latency-svc-dqnx5 Jan 23 11:18:02.776: INFO: Got endpoints: latency-svc-dqnx5 [2.039325094s] Jan 23 11:18:02.817: INFO: Created: latency-svc-l6zmg Jan 23 11:18:02.851: INFO: Got endpoints: latency-svc-l6zmg [2.084742143s] Jan 23 11:18:02.985: INFO: Created: latency-svc-jklbd Jan 23 11:18:02.995: INFO: Got endpoints: latency-svc-jklbd [2.078810628s] Jan 23 11:18:03.048: INFO: Created: latency-svc-5njzf Jan 23 11:18:03.145: INFO: Got endpoints: latency-svc-5njzf [2.204756769s] Jan 23 11:18:03.161: INFO: Created: latency-svc-zwqtc Jan 23 11:18:03.186: INFO: Got endpoints: latency-svc-zwqtc [2.045660301s] Jan 23 11:18:03.247: INFO: Created: latency-svc-xn9k2 Jan 23 11:18:03.412: INFO: Got endpoints: latency-svc-xn9k2 [2.238005661s] Jan 23 11:18:03.455: INFO: Created: latency-svc-wxjv9 Jan 23 11:18:03.632: INFO: Got endpoints: latency-svc-wxjv9 [2.38033649s] Jan 23 11:18:03.635: INFO: Created: latency-svc-gvmhz Jan 23 11:18:03.654: INFO: Got endpoints: latency-svc-gvmhz [2.074109695s] Jan 23 11:18:03.727: INFO: Created: latency-svc-k425r Jan 23 11:18:03.867: INFO: Got 
endpoints: latency-svc-k425r [2.194945329s] Jan 23 11:18:04.036: INFO: Created: latency-svc-zjpcn Jan 23 11:18:05.128: INFO: Got endpoints: latency-svc-zjpcn [3.288601172s] Jan 23 11:18:05.329: INFO: Created: latency-svc-m5r94 Jan 23 11:18:05.343: INFO: Got endpoints: latency-svc-m5r94 [3.297761375s] Jan 23 11:18:05.420: INFO: Created: latency-svc-krndd Jan 23 11:18:05.581: INFO: Created: latency-svc-vfql9 Jan 23 11:18:05.604: INFO: Got endpoints: latency-svc-krndd [3.507963836s] Jan 23 11:18:05.605: INFO: Got endpoints: latency-svc-vfql9 [3.315605089s] Jan 23 11:18:05.655: INFO: Created: latency-svc-wmdw2 Jan 23 11:18:05.836: INFO: Got endpoints: latency-svc-wmdw2 [3.47034521s] Jan 23 11:18:05.897: INFO: Created: latency-svc-clkqq Jan 23 11:18:05.897: INFO: Created: latency-svc-4l5jg Jan 23 11:18:05.918: INFO: Got endpoints: latency-svc-4l5jg [3.312171574s] Jan 23 11:18:05.933: INFO: Got endpoints: latency-svc-clkqq [3.156148352s] Jan 23 11:18:06.025: INFO: Created: latency-svc-hxg5c Jan 23 11:18:06.035: INFO: Got endpoints: latency-svc-hxg5c [3.184422325s] Jan 23 11:18:06.095: INFO: Created: latency-svc-s4p8f Jan 23 11:18:06.219: INFO: Got endpoints: latency-svc-s4p8f [3.224283326s] Jan 23 11:18:06.250: INFO: Created: latency-svc-9fd6m Jan 23 11:18:06.269: INFO: Got endpoints: latency-svc-9fd6m [3.123273358s] Jan 23 11:18:06.425: INFO: Created: latency-svc-nrjln Jan 23 11:18:06.519: INFO: Created: latency-svc-m5d54 Jan 23 11:18:06.701: INFO: Got endpoints: latency-svc-nrjln [3.515497988s] Jan 23 11:18:06.743: INFO: Created: latency-svc-lrb45 Jan 23 11:18:06.793: INFO: Got endpoints: latency-svc-m5d54 [3.380564381s] Jan 23 11:18:06.795: INFO: Got endpoints: latency-svc-lrb45 [3.162665401s] Jan 23 11:18:06.925: INFO: Created: latency-svc-j2gf7 Jan 23 11:18:06.955: INFO: Got endpoints: latency-svc-j2gf7 [3.30142123s] Jan 23 11:18:07.012: INFO: Created: latency-svc-s2lfn Jan 23 11:18:07.086: INFO: Got endpoints: latency-svc-s2lfn [3.218318846s] Jan 23 11:18:07.100: 
INFO: Created: latency-svc-96dgm Jan 23 11:18:07.108: INFO: Got endpoints: latency-svc-96dgm [1.980182058s] Jan 23 11:18:07.143: INFO: Created: latency-svc-bb2t4 Jan 23 11:18:07.155: INFO: Got endpoints: latency-svc-bb2t4 [1.811034615s] Jan 23 11:18:07.317: INFO: Created: latency-svc-9bv99 Jan 23 11:18:07.328: INFO: Got endpoints: latency-svc-9bv99 [1.723981637s] Jan 23 11:18:07.373: INFO: Created: latency-svc-twl4n Jan 23 11:18:07.401: INFO: Got endpoints: latency-svc-twl4n [1.796205765s] Jan 23 11:18:07.522: INFO: Created: latency-svc-vssw4 Jan 23 11:18:07.535: INFO: Got endpoints: latency-svc-vssw4 [1.698513903s] Jan 23 11:18:07.579: INFO: Created: latency-svc-4thgq Jan 23 11:18:07.600: INFO: Got endpoints: latency-svc-4thgq [1.681769012s] Jan 23 11:18:07.712: INFO: Created: latency-svc-xsnst Jan 23 11:18:07.724: INFO: Got endpoints: latency-svc-xsnst [1.790764748s] Jan 23 11:18:07.763: INFO: Created: latency-svc-zktff Jan 23 11:18:07.775: INFO: Got endpoints: latency-svc-zktff [1.739581211s] Jan 23 11:18:07.775: INFO: Latencies: [257.884464ms 350.193296ms 394.787557ms 563.100534ms 616.946274ms 765.847554ms 809.407066ms 1.104574737s 1.122020993s 1.408715321s 1.531804397s 1.592623946s 1.617051316s 1.635784216s 1.651537699s 1.681769012s 1.688618029s 1.698513903s 1.705230052s 1.71018293s 1.723981637s 1.739581211s 1.771917272s 1.790764748s 1.796205765s 1.800647704s 1.811034615s 1.815821401s 1.823391298s 1.824089624s 1.831771288s 1.908530485s 1.913349706s 1.92523955s 1.930163526s 1.949573686s 1.952448326s 1.957729453s 1.964285611s 1.965529483s 1.980182058s 1.983582474s 1.983742124s 2.009031554s 2.011369085s 2.018737832s 2.023615539s 2.025965401s 2.026364702s 2.038077615s 2.039325094s 2.040922033s 2.045660301s 2.04799713s 2.057498988s 2.066369055s 2.07261119s 2.074109695s 2.074335674s 2.078810628s 2.084742143s 2.087792578s 2.0892622s 2.098781745s 2.104421237s 2.109647672s 2.111094115s 2.11623193s 2.119808767s 2.134762548s 2.143798833s 2.149384701s 2.168389761s 
2.169749545s 2.174653971s 2.177885513s 2.190517905s 2.194945329s 2.200271225s 2.203456506s 2.204756769s 2.205432755s 2.212312944s 2.212757906s 2.213327589s 2.215090946s 2.2217917s 2.226944389s 2.22802731s 2.235302388s 2.238005661s 2.242439678s 2.258987093s 2.268657017s 2.270159454s 2.272596665s 2.281853584s 2.283119771s 2.284850658s 2.288152173s 2.290467837s 2.30422201s 2.312615977s 2.319058041s 2.324309016s 2.326999374s 2.33298108s 2.342048083s 2.345565269s 2.3557244s 2.356818988s 2.359130846s 2.361159672s 2.374508494s 2.37611482s 2.376515618s 2.38033649s 2.380668843s 2.382102859s 2.386946277s 2.393693732s 2.401944804s 2.402741068s 2.405461645s 2.408687322s 2.408880468s 2.423180345s 2.436053502s 2.436832584s 2.442410424s 2.459767874s 2.463379412s 2.482699227s 2.4843604s 2.495273128s 2.508283175s 2.53134407s 2.538396013s 2.538768711s 2.544475885s 2.557641959s 2.560182007s 2.570532525s 2.572645384s 2.599260402s 2.60014215s 2.602265994s 2.61824658s 2.618332655s 2.654741926s 2.660403441s 2.666080318s 2.674707277s 2.682464299s 2.702976844s 2.70457689s 2.705296123s 2.711138198s 2.7176649s 2.746461664s 2.758335095s 2.760689988s 2.786865612s 2.792472291s 2.79533067s 2.813527021s 2.815035086s 2.860518889s 2.89578794s 3.066901191s 3.123273358s 3.124745551s 3.156148352s 3.162665401s 3.180924198s 3.184422325s 3.218318846s 3.224283326s 3.269522251s 3.288601172s 3.297761375s 3.30142123s 3.311327351s 3.312171574s 3.315605089s 3.326102762s 3.333824427s 3.343924959s 3.368819684s 3.373435849s 3.378905642s 3.380335841s 3.380564381s 3.388995041s 3.389034048s 3.398645578s 3.47034521s 3.507963836s 3.515497988s 3.565170445s] Jan 23 11:18:07.776: INFO: 50 %ile: 2.290467837s Jan 23 11:18:07.776: INFO: 90 %ile: 3.297761375s Jan 23 11:18:07.776: INFO: 99 %ile: 3.515497988s Jan 23 11:18:07.776: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 11:18:07.776: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-4hj6l" for this suite. Jan 23 11:18:57.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:18:57.990: INFO: namespace: e2e-tests-svc-latency-4hj6l, resource: bindings, ignored listing per whitelist Jan 23 11:18:58.009: INFO: namespace e2e-tests-svc-latency-4hj6l deletion completed in 50.226072137s • [SLOW TEST:90.853 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 11:18:58.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-kll7b [It] should perform canary updates and phased rolling updates of template modifications [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan 23 11:18:58.215: INFO: Found 0 stateful pods, waiting for 3
Jan 23 11:19:08.234: INFO: Found 2 stateful pods, waiting for 3
Jan 23 11:19:18.230: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 11:19:18.230: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 11:19:18.230: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 23 11:19:28.234: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 11:19:28.234: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 11:19:28.234: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 23 11:19:28.311: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 23 11:19:38.489: INFO: Updating stateful set ss2
Jan 23 11:19:38.591: INFO: Waiting for Pod e2e-tests-statefulset-kll7b/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan 23 11:19:48.913: INFO: Found 1 stateful pods, waiting for 3
Jan 23 11:19:58.996: INFO: Found 2 stateful pods, waiting for 3
Jan 23 11:20:08.943: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 11:20:08.943: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 11:20:08.943: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Jan 23 11:20:18.947: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 11:20:18.948: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 11:20:18.948: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 23 11:20:19.033: INFO: Updating stateful set ss2
Jan 23 11:20:19.087: INFO: Waiting for Pod e2e-tests-statefulset-kll7b/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 23 11:20:30.292: INFO: Updating stateful set ss2
Jan 23 11:20:30.354: INFO: Waiting for StatefulSet e2e-tests-statefulset-kll7b/ss2 to complete update
Jan 23 11:20:30.354: INFO: Waiting for Pod e2e-tests-statefulset-kll7b/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 23 11:20:40.375: INFO: Waiting for StatefulSet e2e-tests-statefulset-kll7b/ss2 to complete update
Jan 23 11:20:40.375: INFO: Waiting for Pod e2e-tests-statefulset-kll7b/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 23 11:20:50.366: INFO: Waiting for StatefulSet e2e-tests-statefulset-kll7b/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 23 11:21:00.382: INFO: Deleting all statefulset in ns e2e-tests-statefulset-kll7b
Jan 23 11:21:00.388: INFO: Scaling statefulset ss2 to 0
Jan 23 11:21:40.495: INFO: Waiting for statefulset status.replicas updated to 0
Jan 23 11:21:40.509: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:21:40.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-kll7b" for this suite.
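The canary and phased steps above exercise the StatefulSet RollingUpdate partition rule: only pods whose ordinal is greater than or equal to the partition are moved to the update revision, while lower ordinals stay on the current revision. A minimal Go sketch of that rule, using the revision names from the log; this is illustrative only, not the StatefulSet controller's actual code:

```go
package main

import "fmt"

// revisionsAfterUpdate sketches the partition rule: during a partitioned
// RollingUpdate, pods with ordinal >= partition get the update revision;
// pods below the partition keep the current revision.
func revisionsAfterUpdate(replicas, partition int, current, update string) []string {
	revs := make([]string, replicas)
	for ordinal := 0; ordinal < replicas; ordinal++ {
		if ordinal >= partition {
			revs[ordinal] = update
		} else {
			revs[ordinal] = current
		}
	}
	return revs
}

func main() {
	// Canary step from the log: 3 replicas with partition=2, so only
	// ss2-2 rolls to the update revision ss2-7c9b54fd4c.
	fmt.Println(revisionsAfterUpdate(3, 2, "ss2-6c5cd755cd", "ss2-7c9b54fd4c"))
	// Phased step: lowering the partition to 0 rolls the remaining pods.
	fmt.Println(revisionsAfterUpdate(3, 0, "ss2-6c5cd755cd", "ss2-7c9b54fd4c"))
}
```

Lowering the partition in stages (2, then 1, then 0) is what produces the "phased" sequence of per-pod revision waits in the log.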
Jan 23 11:21:48.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:21:48.951: INFO: namespace: e2e-tests-statefulset-kll7b, resource: bindings, ignored listing per whitelist
Jan 23 11:21:49.040: INFO: namespace e2e-tests-statefulset-kll7b deletion completed in 8.255096895s
• [SLOW TEST:171.030 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:21:49.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-g854
STEP: Creating a pod to test atomic-volume-subpath
Jan 23 11:21:49.342: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-g854" in namespace "e2e-tests-subpath-ff2vx" to be "success or failure"
Jan 23 11:21:49.351: INFO: Pod "pod-subpath-test-projected-g854": Phase="Pending", Reason="", readiness=false. Elapsed: 9.609023ms
Jan 23 11:21:51.419: INFO: Pod "pod-subpath-test-projected-g854": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077047194s
Jan 23 11:21:53.434: INFO: Pod "pod-subpath-test-projected-g854": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091959523s
Jan 23 11:21:55.467: INFO: Pod "pod-subpath-test-projected-g854": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125308085s
Jan 23 11:21:57.486: INFO: Pod "pod-subpath-test-projected-g854": Phase="Pending", Reason="", readiness=false. Elapsed: 8.144031218s
Jan 23 11:21:59.508: INFO: Pod "pod-subpath-test-projected-g854": Phase="Pending", Reason="", readiness=false. Elapsed: 10.166209603s
Jan 23 11:22:01.522: INFO: Pod "pod-subpath-test-projected-g854": Phase="Pending", Reason="", readiness=false. Elapsed: 12.180074937s
Jan 23 11:22:04.007: INFO: Pod "pod-subpath-test-projected-g854": Phase="Pending", Reason="", readiness=false. Elapsed: 14.665298868s
Jan 23 11:22:06.019: INFO: Pod "pod-subpath-test-projected-g854": Phase="Pending", Reason="", readiness=false. Elapsed: 16.677317941s
Jan 23 11:22:08.036: INFO: Pod "pod-subpath-test-projected-g854": Phase="Running", Reason="", readiness=false. Elapsed: 18.694116148s
Jan 23 11:22:10.055: INFO: Pod "pod-subpath-test-projected-g854": Phase="Running", Reason="", readiness=false. Elapsed: 20.713135258s
Jan 23 11:22:12.081: INFO: Pod "pod-subpath-test-projected-g854": Phase="Running", Reason="", readiness=false. Elapsed: 22.739039459s
Jan 23 11:22:14.116: INFO: Pod "pod-subpath-test-projected-g854": Phase="Running", Reason="", readiness=false. Elapsed: 24.774081576s
Jan 23 11:22:16.135: INFO: Pod "pod-subpath-test-projected-g854": Phase="Running", Reason="", readiness=false. Elapsed: 26.792825084s
Jan 23 11:22:18.153: INFO: Pod "pod-subpath-test-projected-g854": Phase="Running", Reason="", readiness=false. Elapsed: 28.811152264s
Jan 23 11:22:20.183: INFO: Pod "pod-subpath-test-projected-g854": Phase="Running", Reason="", readiness=false. Elapsed: 30.841249131s
Jan 23 11:22:22.216: INFO: Pod "pod-subpath-test-projected-g854": Phase="Running", Reason="", readiness=false. Elapsed: 32.874501223s
Jan 23 11:22:24.232: INFO: Pod "pod-subpath-test-projected-g854": Phase="Running", Reason="", readiness=false. Elapsed: 34.889945184s
Jan 23 11:22:26.256: INFO: Pod "pod-subpath-test-projected-g854": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.914217743s
STEP: Saw pod success
Jan 23 11:22:26.256: INFO: Pod "pod-subpath-test-projected-g854" satisfied condition "success or failure"
Jan 23 11:22:26.266: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-g854 container test-container-subpath-projected-g854:
STEP: delete the pod
Jan 23 11:22:26.438: INFO: Waiting for pod pod-subpath-test-projected-g854 to disappear
Jan 23 11:22:26.452: INFO: Pod pod-subpath-test-projected-g854 no longer exists
STEP: Deleting pod pod-subpath-test-projected-g854
Jan 23 11:22:26.452: INFO: Deleting pod "pod-subpath-test-projected-g854" in namespace "e2e-tests-subpath-ff2vx"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:22:26.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-ff2vx" for this suite.
Jan 23 11:22:32.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:22:32.646: INFO: namespace: e2e-tests-subpath-ff2vx, resource: bindings, ignored listing per whitelist Jan 23 11:22:32.677: INFO: namespace e2e-tests-subpath-ff2vx deletion completed in 6.17309996s • [SLOW TEST:43.637 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 11:22:32.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 11:22:32.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "e2e-tests-pods-w8j9z" for this suite.
Jan 23 11:22:57.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:22:57.248: INFO: namespace: e2e-tests-pods-w8j9z, resource: bindings, ignored listing per whitelist
Jan 23 11:22:57.308: INFO: namespace e2e-tests-pods-w8j9z deletion completed in 24.240502929s
• [SLOW TEST:24.631 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:22:57.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 23 11:22:57.578: INFO: Waiting up to 5m0s for pod "pod-b4e56ce4-3dd2-11ea-bb65-0242ac110005" in namespace "e2e-tests-emptydir-hvw8x" to be "success or failure"
Jan 23 11:22:57.598: INFO: Pod "pod-b4e56ce4-3dd2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.560345ms
Jan 23 11:22:59.615: INFO: Pod "pod-b4e56ce4-3dd2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037789157s
Jan 23 11:23:01.625: INFO: Pod "pod-b4e56ce4-3dd2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046921012s
Jan 23 11:23:04.147: INFO: Pod "pod-b4e56ce4-3dd2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.569150681s
Jan 23 11:23:06.443: INFO: Pod "pod-b4e56ce4-3dd2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.865023112s
Jan 23 11:23:08.455: INFO: Pod "pod-b4e56ce4-3dd2-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.877461878s
STEP: Saw pod success
Jan 23 11:23:08.455: INFO: Pod "pod-b4e56ce4-3dd2-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 11:23:08.462: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b4e56ce4-3dd2-11ea-bb65-0242ac110005 container test-container:
STEP: delete the pod
Jan 23 11:23:08.791: INFO: Waiting for pod pod-b4e56ce4-3dd2-11ea-bb65-0242ac110005 to disappear
Jan 23 11:23:08.811: INFO: Pod pod-b4e56ce4-3dd2-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:23:08.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hvw8x" for this suite.
Jan 23 11:23:16.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:23:16.291: INFO: namespace: e2e-tests-emptydir-hvw8x, resource: bindings, ignored listing per whitelist
Jan 23 11:23:16.548: INFO: namespace e2e-tests-emptydir-hvw8x deletion completed in 7.719005234s
• [SLOW TEST:19.239 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:23:16.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 23 11:26:19.348: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:26:19.420: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:26:21.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:26:21.438: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:26:23.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:26:23.433: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:26:25.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:26:25.443: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:26:27.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:26:27.455: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:26:29.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:26:29.464: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:26:31.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:26:31.480: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:26:33.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:26:33.432: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:26:35.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:26:35.539: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:26:37.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:26:37.432: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:26:39.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:26:39.436: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:26:41.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:26:41.434: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:26:43.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:26:43.435: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:26:45.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:26:45.435: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:26:47.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:26:47.436: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:26:49.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:26:50.044: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:26:51.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:26:51.442: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:26:53.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:26:53.436: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:26:55.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:26:55.448: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:26:57.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:26:57.435: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:26:59.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:26:59.437: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:27:01.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:27:01.445: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:27:03.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:27:03.433: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:27:05.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:27:05.439: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:27:07.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:27:07.438: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:27:09.421: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:27:09.437: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:27:11.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:27:11.453: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:27:13.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:27:13.437: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:27:15.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:27:15.442: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:27:17.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:27:17.446: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:27:19.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:27:19.440: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:27:21.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:27:21.437: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:27:23.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:27:23.434: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:27:25.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:27:25.438: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:27:27.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:27:27.438: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:27:29.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:27:29.442: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:27:31.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:27:31.437: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:27:33.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:27:33.440: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:27:35.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:27:35.444: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:27:37.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:27:37.441: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:27:39.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:27:39.500: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:27:41.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:27:41.494: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:27:43.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:27:43.489: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:27:45.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:27:45.439: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:27:47.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:27:47.440: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:27:49.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:27:49.436: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:27:51.420: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:27:51.441: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 23 11:27:53.421: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 11:27:53.446: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:27:53.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-cp2xs" for this suite.
Jan 23 11:28:17.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:28:17.644: INFO: namespace: e2e-tests-container-lifecycle-hook-cp2xs, resource: bindings, ignored listing per whitelist
Jan 23 11:28:17.702: INFO: namespace e2e-tests-container-lifecycle-hook-cp2xs deletion completed in 24.245382932s
• [SLOW TEST:301.153 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:28:17.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 23 11:28:17.881: INFO: Waiting up to 5m0s for pod "downward-api-73d01d19-3dd3-11ea-bb65-0242ac110005" in namespace "e2e-tests-downward-api-9qtgc" to be "success or failure"
Jan 23 11:28:17.891: INFO: Pod "downward-api-73d01d19-3dd3-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.177657ms
Jan 23 11:28:19.908: INFO: Pod "downward-api-73d01d19-3dd3-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026488315s
Jan 23 11:28:21.923: INFO: Pod "downward-api-73d01d19-3dd3-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041497474s
Jan 23 11:28:24.184: INFO: Pod "downward-api-73d01d19-3dd3-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.302469301s
Jan 23 11:28:26.201: INFO: Pod "downward-api-73d01d19-3dd3-11ea-bb65-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.319762781s
Jan 23 11:28:28.214: INFO: Pod "downward-api-73d01d19-3dd3-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.332680545s
STEP: Saw pod success
Jan 23 11:28:28.214: INFO: Pod "downward-api-73d01d19-3dd3-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 11:28:28.220: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-73d01d19-3dd3-11ea-bb65-0242ac110005 container dapi-container:
STEP: delete the pod
Jan 23 11:28:28.599: INFO: Waiting for pod downward-api-73d01d19-3dd3-11ea-bb65-0242ac110005 to disappear
Jan 23 11:28:28.629: INFO: Pod downward-api-73d01d19-3dd3-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:28:28.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9qtgc" for this suite.
Jan 23 11:28:34.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:28:34.862: INFO: namespace: e2e-tests-downward-api-9qtgc, resource: bindings, ignored listing per whitelist
Jan 23 11:28:34.961: INFO: namespace e2e-tests-downward-api-9qtgc deletion completed in 6.185584678s
• [SLOW TEST:17.258 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:28:34.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:29:30.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-7jjj9" for this suite.
Jan 23 11:29:38.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:29:38.581: INFO: namespace: e2e-tests-container-runtime-7jjj9, resource: bindings, ignored listing per whitelist
Jan 23 11:29:38.649: INFO: namespace e2e-tests-container-runtime-7jjj9 deletion completed in 8.345054675s
• [SLOW TEST:63.687 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:29:38.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-a41cd562-3dd3-11ea-bb65-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 23 11:29:38.922: INFO: Waiting up to 5m0s for pod "pod-configmaps-a41d760f-3dd3-11ea-bb65-0242ac110005" in namespace "e2e-tests-configmap-9gktn" to be "success or failure"
Jan 23 11:29:38.939: INFO: Pod "pod-configmaps-a41d760f-3dd3-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.618533ms
Jan 23 11:29:40.970: INFO: Pod "pod-configmaps-a41d760f-3dd3-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047830069s
Jan 23 11:29:42.988: INFO: Pod "pod-configmaps-a41d760f-3dd3-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065006614s
Jan 23 11:29:45.049: INFO: Pod "pod-configmaps-a41d760f-3dd3-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125880921s
Jan 23 11:29:47.073: INFO: Pod "pod-configmaps-a41d760f-3dd3-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.149907433s
STEP: Saw pod success
Jan 23 11:29:47.073: INFO: Pod "pod-configmaps-a41d760f-3dd3-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 11:29:47.081: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-a41d760f-3dd3-11ea-bb65-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Jan 23 11:29:47.359: INFO: Waiting for pod pod-configmaps-a41d760f-3dd3-11ea-bb65-0242ac110005 to disappear
Jan 23 11:29:47.369: INFO: Pod pod-configmaps-a41d760f-3dd3-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:29:47.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-9gktn" for this suite.
Jan 23 11:29:53.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:29:53.481: INFO: namespace: e2e-tests-configmap-9gktn, resource: bindings, ignored listing per whitelist
Jan 23 11:29:53.542: INFO: namespace e2e-tests-configmap-9gktn deletion completed in 6.166011473s
• [SLOW TEST:14.892 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:29:53.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 23 11:29:53.792: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8b87r,SelfLink:/api/v1/namespaces/e2e-tests-watch-8b87r/configmaps/e2e-watch-test-configmap-a,UID:acfc20e5-3dd3-11ea-a994-fa163e34d433,ResourceVersion:19181028,Generation:0,CreationTimestamp:2020-01-23 11:29:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 23 11:29:53.793: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8b87r,SelfLink:/api/v1/namespaces/e2e-tests-watch-8b87r/configmaps/e2e-watch-test-configmap-a,UID:acfc20e5-3dd3-11ea-a994-fa163e34d433,ResourceVersion:19181028,Generation:0,CreationTimestamp:2020-01-23 11:29:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 23 11:30:03.831: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8b87r,SelfLink:/api/v1/namespaces/e2e-tests-watch-8b87r/configmaps/e2e-watch-test-configmap-a,UID:acfc20e5-3dd3-11ea-a994-fa163e34d433,ResourceVersion:19181041,Generation:0,CreationTimestamp:2020-01-23 11:29:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 23 11:30:03.831: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8b87r,SelfLink:/api/v1/namespaces/e2e-tests-watch-8b87r/configmaps/e2e-watch-test-configmap-a,UID:acfc20e5-3dd3-11ea-a994-fa163e34d433,ResourceVersion:19181041,Generation:0,CreationTimestamp:2020-01-23 11:29:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 23 11:30:13.892: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8b87r,SelfLink:/api/v1/namespaces/e2e-tests-watch-8b87r/configmaps/e2e-watch-test-configmap-a,UID:acfc20e5-3dd3-11ea-a994-fa163e34d433,ResourceVersion:19181054,Generation:0,CreationTimestamp:2020-01-23 11:29:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 23 11:30:13.893: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8b87r,SelfLink:/api/v1/namespaces/e2e-tests-watch-8b87r/configmaps/e2e-watch-test-configmap-a,UID:acfc20e5-3dd3-11ea-a994-fa163e34d433,ResourceVersion:19181054,Generation:0,CreationTimestamp:2020-01-23 11:29:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 23 11:30:23.940: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8b87r,SelfLink:/api/v1/namespaces/e2e-tests-watch-8b87r/configmaps/e2e-watch-test-configmap-a,UID:acfc20e5-3dd3-11ea-a994-fa163e34d433,ResourceVersion:19181067,Generation:0,CreationTimestamp:2020-01-23 11:29:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 23 11:30:23.941: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8b87r,SelfLink:/api/v1/namespaces/e2e-tests-watch-8b87r/configmaps/e2e-watch-test-configmap-a,UID:acfc20e5-3dd3-11ea-a994-fa163e34d433,ResourceVersion:19181067,Generation:0,CreationTimestamp:2020-01-23 11:29:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 23 11:30:33.973: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-8b87r,SelfLink:/api/v1/namespaces/e2e-tests-watch-8b87r/configmaps/e2e-watch-test-configmap-b,UID:c4ed7cb7-3dd3-11ea-a994-fa163e34d433,ResourceVersion:19181079,Generation:0,CreationTimestamp:2020-01-23 11:30:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 23 11:30:33.974: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-8b87r,SelfLink:/api/v1/namespaces/e2e-tests-watch-8b87r/configmaps/e2e-watch-test-configmap-b,UID:c4ed7cb7-3dd3-11ea-a994-fa163e34d433,ResourceVersion:19181079,Generation:0,CreationTimestamp:2020-01-23 11:30:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 23 11:30:43.993: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-8b87r,SelfLink:/api/v1/namespaces/e2e-tests-watch-8b87r/configmaps/e2e-watch-test-configmap-b,UID:c4ed7cb7-3dd3-11ea-a994-fa163e34d433,ResourceVersion:19181092,Generation:0,CreationTimestamp:2020-01-23 11:30:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 23 11:30:43.994: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-8b87r,SelfLink:/api/v1/namespaces/e2e-tests-watch-8b87r/configmaps/e2e-watch-test-configmap-b,UID:c4ed7cb7-3dd3-11ea-a994-fa163e34d433,ResourceVersion:19181092,Generation:0,CreationTimestamp:2020-01-23 11:30:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:30:53.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-8b87r" for this suite.
Jan 23 11:31:00.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:31:00.195: INFO: namespace: e2e-tests-watch-8b87r, resource: bindings, ignored listing per whitelist
Jan 23 11:31:00.243: INFO: namespace e2e-tests-watch-8b87r deletion completed in 6.235989859s

• [SLOW TEST:66.701 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:31:00.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 23 11:31:00.427: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:31:01.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-rh4ts" for this suite.
Jan 23 11:31:07.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:31:07.805: INFO: namespace: e2e-tests-custom-resource-definition-rh4ts, resource: bindings, ignored listing per whitelist
Jan 23 11:31:07.903: INFO: namespace e2e-tests-custom-resource-definition-rh4ts deletion completed in 6.230769157s

• [SLOW TEST:7.660 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:31:07.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jan 23 11:31:08.675: INFO: Waiting up to 5m0s for pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-p26sg" in namespace "e2e-tests-svcaccounts-c6c5t" to be "success or failure"
Jan 23 11:31:08.708: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-p26sg": Phase="Pending", Reason="", readiness=false. Elapsed: 33.132567ms
Jan 23 11:31:10.722: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-p26sg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047104294s
Jan 23 11:31:12.733: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-p26sg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058548908s
Jan 23 11:31:14.747: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-p26sg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072481287s
Jan 23 11:31:16.779: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-p26sg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.103923931s
Jan 23 11:31:19.153: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-p26sg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.478522088s
Jan 23 11:31:21.166: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-p26sg": Phase="Pending", Reason="", readiness=false. Elapsed: 12.491048875s
Jan 23 11:31:23.184: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-p26sg": Phase="Running", Reason="", readiness=false. Elapsed: 14.508999931s
Jan 23 11:31:25.311: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-p26sg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.636558008s
STEP: Saw pod success
Jan 23 11:31:25.312: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-p26sg" satisfied condition "success or failure"
Jan 23 11:31:25.324: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-p26sg container token-test:
STEP: delete the pod
Jan 23 11:31:25.722: INFO: Waiting for pod pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-p26sg to disappear
Jan 23 11:31:25.735: INFO: Pod pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-p26sg no longer exists
STEP: Creating a pod to test consume service account root CA
Jan 23 11:31:25.757: INFO: Waiting up to 5m0s for pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-fggz8" in namespace "e2e-tests-svcaccounts-c6c5t" to be "success or failure"
Jan 23 11:31:25.778: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-fggz8": Phase="Pending", Reason="", readiness=false. Elapsed: 21.049678ms
Jan 23 11:31:27.795: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-fggz8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038457654s
Jan 23 11:31:29.808: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-fggz8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051371467s
Jan 23 11:31:31.821: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-fggz8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064009041s
Jan 23 11:31:34.322: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-fggz8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.565559737s
Jan 23 11:31:36.443: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-fggz8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.685700972s
Jan 23 11:31:38.476: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-fggz8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.719001506s
Jan 23 11:31:40.515: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-fggz8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.757801041s
STEP: Saw pod success
Jan 23 11:31:40.515: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-fggz8" satisfied condition "success or failure"
Jan 23 11:31:40.554: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-fggz8 container root-ca-test:
STEP: delete the pod
Jan 23 11:31:40.977: INFO: Waiting for pod pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-fggz8 to disappear
Jan 23 11:31:41.182: INFO: Pod pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-fggz8 no longer exists
STEP: Creating a pod to test consume service account namespace
Jan 23 11:31:41.273: INFO: Waiting up to 5m0s for pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-6qflr" in namespace "e2e-tests-svcaccounts-c6c5t" to be "success or failure"
Jan 23 11:31:41.293: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-6qflr": Phase="Pending", Reason="", readiness=false. Elapsed: 20.176293ms
Jan 23 11:31:43.307: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-6qflr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03456172s
Jan 23 11:31:45.353: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-6qflr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080300058s
Jan 23 11:31:47.368: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-6qflr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095288533s
Jan 23 11:31:49.373: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-6qflr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100086559s
Jan 23 11:31:51.470: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-6qflr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.196675619s
Jan 23 11:31:53.493: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-6qflr": Phase="Pending", Reason="", readiness=false. Elapsed: 12.219731244s
Jan 23 11:31:56.118: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-6qflr": Phase="Pending", Reason="", readiness=false. Elapsed: 14.845395609s
Jan 23 11:31:58.156: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-6qflr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.882741456s
STEP: Saw pod success
Jan 23 11:31:58.156: INFO: Pod "pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-6qflr" satisfied condition "success or failure"
Jan 23 11:31:58.164: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-6qflr container namespace-test:
STEP: delete the pod
Jan 23 11:31:58.415: INFO: Waiting for pod pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-6qflr to disappear
Jan 23 11:31:58.438: INFO: Pod pod-service-account-d99ab70b-3dd3-11ea-bb65-0242ac110005-6qflr no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:31:58.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-c6c5t" for this suite.
Jan 23 11:32:06.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:32:06.761: INFO: namespace: e2e-tests-svcaccounts-c6c5t, resource: bindings, ignored listing per whitelist
Jan 23 11:32:06.770: INFO: namespace e2e-tests-svcaccounts-c6c5t deletion completed in 8.298480888s

• [SLOW TEST:58.866 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:32:06.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 23 11:32:06.992: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 23 11:32:12.030: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:32:14.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-9tzw2" for this suite.
Jan 23 11:32:25.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:32:25.179: INFO: namespace: e2e-tests-replication-controller-9tzw2, resource: bindings, ignored listing per whitelist
Jan 23 11:32:25.234: INFO: namespace e2e-tests-replication-controller-9tzw2 deletion completed in 10.993127147s

• [SLOW TEST:18.464 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:32:25.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-26spf
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan 23 11:32:26.352: INFO: Found 0 stateful pods, waiting for 3
Jan 23 11:32:36.425: INFO: Found 1 stateful pods, waiting for 3
Jan 23 11:32:46.369: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 11:32:46.369: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 11:32:46.369: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 23 11:32:56.377: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 11:32:56.377: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 11:32:56.377: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 11:32:56.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-26spf ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 23 11:32:57.030: INFO: stderr: "I0123 11:32:56.697246 329 log.go:172] (0xc0001380b0) (0xc0006a2000) Create stream\nI0123 11:32:56.697464 329 log.go:172] (0xc0001380b0) (0xc0006a2000) Stream added, broadcasting: 1\nI0123 11:32:56.702322 329 log.go:172] (0xc0001380b0) Reply frame received for 1\nI0123 11:32:56.702366 329 log.go:172] (0xc0001380b0) (0xc000632c80) Create stream\nI0123 11:32:56.702378 329 log.go:172] (0xc0001380b0) (0xc000632c80) Stream added, broadcasting: 3\nI0123 11:32:56.703345 329 log.go:172] (0xc0001380b0) Reply frame received for 3\nI0123 11:32:56.703394 329 log.go:172] (0xc0001380b0) (0xc0007dc000) Create stream\nI0123 11:32:56.703403 329 log.go:172] (0xc0001380b0) (0xc0007dc000) Stream added, broadcasting: 5\nI0123 11:32:56.704504 329 log.go:172] (0xc0001380b0) Reply frame received for 5\nI0123 11:32:56.889469 329 log.go:172] (0xc0001380b0) Data frame received for 3\nI0123 11:32:56.889601 329 log.go:172] (0xc000632c80) (3) Data frame handling\nI0123 11:32:56.889662 329 log.go:172] (0xc000632c80) (3) Data frame sent\nI0123 11:32:57.015412 329 log.go:172] (0xc0001380b0) Data frame received for 1\nI0123 11:32:57.015493 329 log.go:172] (0xc0006a2000) (1) Data frame handling\nI0123 11:32:57.015556 329 log.go:172] (0xc0006a2000) (1) Data frame sent\nI0123 11:32:57.016060 329 log.go:172] (0xc0001380b0) (0xc000632c80) Stream removed, broadcasting: 3\nI0123 11:32:57.016123 329 log.go:172] (0xc0001380b0) (0xc0006a2000) Stream removed, broadcasting: 1\nI0123 11:32:57.016667 329 log.go:172] (0xc0001380b0) (0xc0007dc000) Stream removed, broadcasting: 5\nI0123 11:32:57.016931 329 log.go:172] (0xc0001380b0) (0xc0006a2000) Stream removed, broadcasting: 1\nI0123 11:32:57.016956 329 log.go:172] (0xc0001380b0) (0xc000632c80) Stream removed, broadcasting: 3\nI0123 11:32:57.016972 329 log.go:172] (0xc0001380b0) (0xc0007dc000) Stream removed, broadcasting: 5\nI0123 11:32:57.017253 329 log.go:172] (0xc0001380b0) Go away received\n"
Jan 23 11:32:57.030: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 23 11:32:57.030: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 23 11:33:07.130: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 23 11:33:17.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-26spf ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 23 11:33:17.970: INFO: stderr: "I0123 11:33:17.462448 351 log.go:172] (0xc0006e82c0) (0xc000712640) Create stream\nI0123 11:33:17.462718 351 log.go:172] (0xc0006e82c0) (0xc000712640) Stream added, broadcasting: 1\nI0123 11:33:17.468441 351 log.go:172] (0xc0006e82c0) Reply frame received for 1\nI0123 11:33:17.468477 351 log.go:172] (0xc0006e82c0) (0xc000658b40) Create stream\nI0123 11:33:17.468488 351 log.go:172] (0xc0006e82c0) (0xc000658b40) Stream added, broadcasting: 3\nI0123 11:33:17.469730 351 log.go:172] (0xc0006e82c0) Reply frame received for 3\nI0123 11:33:17.469767 351 log.go:172] (0xc0006e82c0) (0xc0007126e0) Create stream\nI0123 11:33:17.469783 351 log.go:172] (0xc0006e82c0) (0xc0007126e0) Stream added, broadcasting: 5\nI0123 11:33:17.470829 351 log.go:172] (0xc0006e82c0) Reply frame received for 5\nI0123 11:33:17.607153 351 log.go:172] (0xc0006e82c0) Data frame received for 3\nI0123 11:33:17.607280 351 log.go:172] (0xc000658b40) (3) Data frame handling\nI0123 11:33:17.607340 351 log.go:172] (0xc000658b40) (3) Data frame sent\nI0123 11:33:17.948485 351 log.go:172] (0xc0006e82c0) Data frame received for 1\nI0123 11:33:17.948623 351 log.go:172] (0xc000712640) (1) Data frame handling\nI0123 11:33:17.948680 351 log.go:172] (0xc000712640) (1) Data frame sent\nI0123 11:33:17.949480 351 log.go:172] (0xc0006e82c0) (0xc000712640) Stream removed, broadcasting: 1\nI0123 11:33:17.949718 351 log.go:172] (0xc0006e82c0) (0xc0007126e0) Stream removed, broadcasting: 5\nI0123 11:33:17.949993 351 log.go:172] (0xc0006e82c0) (0xc000658b40) Stream removed, broadcasting: 3\nI0123 11:33:17.950194 351 log.go:172] (0xc0006e82c0) (0xc000712640) Stream removed, broadcasting: 1\nI0123 11:33:17.950246 351 log.go:172] (0xc0006e82c0) (0xc000658b40) Stream removed, broadcasting: 3\nI0123 11:33:17.950274 351 log.go:172] (0xc0006e82c0) (0xc0007126e0) Stream removed, broadcasting: 5\nI0123 11:33:17.950930 351 log.go:172] (0xc0006e82c0) Go away received\n"
Jan 23 11:33:17.970: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 23 11:33:17.970: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 23 11:33:18.024: INFO: Waiting for StatefulSet e2e-tests-statefulset-26spf/ss2 to complete update
Jan 23 11:33:18.024: INFO: Waiting for Pod e2e-tests-statefulset-26spf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 23 11:33:18.024: INFO: Waiting for Pod e2e-tests-statefulset-26spf/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 23 11:33:18.024: INFO: Waiting for Pod e2e-tests-statefulset-26spf/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 23 11:33:28.091: INFO: Waiting for StatefulSet e2e-tests-statefulset-26spf/ss2 to complete update
Jan 23 11:33:28.091: INFO: Waiting for Pod e2e-tests-statefulset-26spf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 23 11:33:28.091: INFO: Waiting for Pod e2e-tests-statefulset-26spf/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 23 11:33:28.092: INFO: Waiting for Pod e2e-tests-statefulset-26spf/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 23 11:33:38.347: INFO: Waiting for StatefulSet e2e-tests-statefulset-26spf/ss2 to complete update
Jan 23 11:33:38.347: INFO: Waiting for Pod e2e-tests-statefulset-26spf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 23 11:33:38.347: INFO: Waiting for Pod e2e-tests-statefulset-26spf/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 23 11:33:48.064: INFO: Waiting for StatefulSet e2e-tests-statefulset-26spf/ss2 to complete update
Jan 23 11:33:48.064: INFO: Waiting for Pod e2e-tests-statefulset-26spf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 23 11:33:48.064: INFO: Waiting for Pod e2e-tests-statefulset-26spf/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 23 11:33:58.316: INFO: Waiting for StatefulSet e2e-tests-statefulset-26spf/ss2 to complete update
Jan 23 11:33:58.316: INFO: Waiting for Pod e2e-tests-statefulset-26spf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 23 11:34:08.046: INFO: Waiting for StatefulSet e2e-tests-statefulset-26spf/ss2 to complete update
STEP: Rolling back to a previous revision
Jan 23 11:34:18.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-26spf ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 23 11:34:18.782: INFO: stderr: "I0123 11:34:18.314679 373 log.go:172] (0xc00015c6e0) (0xc0007c2640) Create stream\nI0123 11:34:18.314993 373 log.go:172] (0xc00015c6e0) (0xc0007c2640) Stream added, broadcasting: 1\nI0123 11:34:18.324061 373 log.go:172] (0xc00015c6e0) Reply frame received for 1\nI0123 11:34:18.324162 373 log.go:172] (0xc00015c6e0) (0xc000644f00) Create stream\nI0123 11:34:18.324179 373 log.go:172] (0xc00015c6e0) (0xc000644f00) Stream added, broadcasting: 3\nI0123 11:34:18.325660 373 log.go:172] (0xc00015c6e0) Reply frame received for 3\nI0123 11:34:18.325760 373 log.go:172] (0xc00015c6e0) (0xc000712000) Create stream\nI0123 11:34:18.325775 373 log.go:172] (0xc00015c6e0) (0xc000712000) Stream added, broadcasting: 5\nI0123 11:34:18.327267 373 log.go:172] (0xc00015c6e0) Reply frame received for 5\nI0123 11:34:18.651008 373 log.go:172] (0xc00015c6e0) Data frame received for 3\nI0123 11:34:18.651075 373 log.go:172] (0xc000644f00) (3) Data frame handling\nI0123 11:34:18.651105 373 log.go:172] (0xc000644f00) (3) Data frame sent\nI0123 11:34:18.768099 373 log.go:172] (0xc00015c6e0) (0xc000644f00) Stream removed, broadcasting: 3\nI0123 11:34:18.768543 373 log.go:172] (0xc00015c6e0) (0xc000712000) Stream removed, broadcasting: 5\nI0123 11:34:18.768712 373 log.go:172] (0xc00015c6e0) Data frame received for 1\nI0123 11:34:18.768747 373 log.go:172] (0xc0007c2640) (1) Data frame handling\nI0123 11:34:18.768770 373 log.go:172] (0xc0007c2640) (1) Data frame sent\nI0123 11:34:18.768787 373 log.go:172] (0xc00015c6e0) (0xc0007c2640) Stream removed, broadcasting: 1\nI0123 11:34:18.768820 373 log.go:172] (0xc00015c6e0) Go away received\nI0123 11:34:18.769422 373 log.go:172] (0xc00015c6e0) (0xc0007c2640) Stream removed, broadcasting: 1\nI0123 11:34:18.769464 373 log.go:172] (0xc00015c6e0) (0xc000644f00) Stream removed, broadcasting: 3\nI0123 11:34:18.769483 373 log.go:172] (0xc00015c6e0) (0xc000712000) Stream removed, broadcasting: 5\n"
Jan 23 11:34:18.783: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 23 11:34:18.783: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 23 11:34:28.912: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 23 11:34:39.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-26spf ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 23 11:34:39.612: INFO: stderr: "I0123 11:34:39.310240 395 log.go:172] (0xc000138580) (0xc0005d4c80) Create stream\nI0123 11:34:39.310644 395 log.go:172] (0xc000138580) (0xc0005d4c80) Stream added, broadcasting: 1\nI0123 11:34:39.316722 395 log.go:172] (0xc000138580) Reply frame received for 1\nI0123 11:34:39.316765 395 log.go:172] (0xc000138580) (0xc0005d4dc0) Create stream\nI0123 11:34:39.316777 395 log.go:172] (0xc000138580) (0xc0005d4dc0) Stream added, broadcasting: 3\nI0123 11:34:39.318041 395 log.go:172] (0xc000138580) Reply frame received for 3\nI0123 11:34:39.318074 395 log.go:172] (0xc000138580) (0xc0006fa000) Create stream\nI0123 11:34:39.318090 395 log.go:172] (0xc000138580) (0xc0006fa000) Stream added, broadcasting: 5\nI0123 11:34:39.319743 395 log.go:172] (0xc000138580) Reply frame received for 5\nI0123 11:34:39.441261 395 log.go:172] (0xc000138580) Data frame received for 3\nI0123 11:34:39.441315 395 log.go:172] (0xc0005d4dc0) (3) Data frame handling\nI0123 11:34:39.441352 395 log.go:172] (0xc0005d4dc0) (3) Data frame sent\nI0123 11:34:39.597883 395 log.go:172] (0xc000138580) Data frame received for 1\nI0123 11:34:39.597940 395 log.go:172] (0xc000138580) (0xc0005d4dc0) Stream removed, broadcasting: 3\nI0123 11:34:39.598035 395 log.go:172] (0xc0005d4c80) (1) Data frame handling\nI0123 11:34:39.598062 395 log.go:172] (0xc0005d4c80) (1) Data frame sent\nI0123 11:34:39.598075 395 log.go:172] (0xc000138580) (0xc0005d4c80) Stream removed, broadcasting: 1\nI0123 11:34:39.598346 395 log.go:172] (0xc000138580) (0xc0006fa000) Stream removed, broadcasting: 5\nI0123 11:34:39.598723 395 log.go:172] (0xc000138580) (0xc0005d4c80) Stream removed, broadcasting: 1\nI0123 11:34:39.598754 395 log.go:172] (0xc000138580) (0xc0005d4dc0) Stream removed, broadcasting: 3\nI0123 11:34:39.598761 395 log.go:172] (0xc000138580) (0xc0006fa000) Stream removed, broadcasting: 5\n"
Jan 23 11:34:39.612: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 23 11:34:39.612: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 23 11:34:50.100: INFO: Waiting for StatefulSet e2e-tests-statefulset-26spf/ss2 to complete update
Jan 23 11:34:50.100: INFO: Waiting for Pod e2e-tests-statefulset-26spf/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 23 11:34:50.100: INFO: Waiting for Pod e2e-tests-statefulset-26spf/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 23 11:35:00.130: INFO: Waiting for StatefulSet e2e-tests-statefulset-26spf/ss2 to complete update
Jan 23 11:35:00.130: INFO: Waiting for Pod e2e-tests-statefulset-26spf/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 23 11:35:00.130: INFO: Waiting for Pod e2e-tests-statefulset-26spf/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 23 11:35:10.137: INFO: Waiting for StatefulSet e2e-tests-statefulset-26spf/ss2 to complete update
Jan 23 11:35:10.137: INFO: Waiting for Pod e2e-tests-statefulset-26spf/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 23 11:35:10.137: INFO: Waiting for Pod e2e-tests-statefulset-26spf/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 23 11:35:20.125: INFO: Waiting for StatefulSet e2e-tests-statefulset-26spf/ss2 to complete update
Jan 23 11:35:20.125: INFO: Waiting for Pod e2e-tests-statefulset-26spf/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 23 11:35:30.123: INFO: Waiting for StatefulSet e2e-tests-statefulset-26spf/ss2 to complete update
Jan 23 11:35:30.123: INFO: Waiting for Pod e2e-tests-statefulset-26spf/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 23 11:35:40.121: INFO: Waiting for StatefulSet e2e-tests-statefulset-26spf/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 23 11:35:50.158: INFO: Deleting all statefulset in ns e2e-tests-statefulset-26spf
Jan 23 11:35:50.164: INFO: Scaling statefulset ss2 to 0
Jan 23 11:36:20.218: INFO: Waiting for statefulset status.replicas updated to 0
Jan 23 11:36:20.231: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:36:20.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-26spf" for this suite.
Jan 23 11:36:28.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:36:28.636: INFO: namespace: e2e-tests-statefulset-26spf, resource: bindings, ignored listing per whitelist Jan 23 11:36:28.724: INFO: namespace e2e-tests-statefulset-26spf deletion completed in 8.387069674s • [SLOW TEST:243.489 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 11:36:28.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 23 11:36:28.934: INFO: Waiting up to 5m0s for pod "pod-98815c7d-3dd4-11ea-bb65-0242ac110005" in namespace "e2e-tests-emptydir-hnkdd" to be "success or failure" Jan 23 11:36:28.988: INFO: Pod "pod-98815c7d-3dd4-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 54.167989ms Jan 23 11:36:31.098: INFO: Pod "pod-98815c7d-3dd4-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163479472s Jan 23 11:36:33.111: INFO: Pod "pod-98815c7d-3dd4-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.17683782s Jan 23 11:36:35.402: INFO: Pod "pod-98815c7d-3dd4-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.467647367s Jan 23 11:36:37.418: INFO: Pod "pod-98815c7d-3dd4-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.483833476s Jan 23 11:36:39.434: INFO: Pod "pod-98815c7d-3dd4-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.500390697s STEP: Saw pod success Jan 23 11:36:39.435: INFO: Pod "pod-98815c7d-3dd4-11ea-bb65-0242ac110005" satisfied condition "success or failure" Jan 23 11:36:39.443: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-98815c7d-3dd4-11ea-bb65-0242ac110005 container test-container: STEP: delete the pod Jan 23 11:36:40.211: INFO: Waiting for pod pod-98815c7d-3dd4-11ea-bb65-0242ac110005 to disappear Jan 23 11:36:40.249: INFO: Pod pod-98815c7d-3dd4-11ea-bb65-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 11:36:40.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-hnkdd" for this suite. 
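The "(root,0666,default)" case above creates a pod that writes a file with mode 0666 into an emptyDir on the default medium and exits, after which the test checks the container log. A hedged sketch of an equivalent pod; the pod name, image, and command are illustrative stand-ins (the actual test uses a generated name and a dedicated mount-test image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0666      # illustrative; the run generates a UID-based name
spec:
  restartPolicy: Never         # the test waits for phase Succeeded ("success or failure")
  containers:
  - name: test-container       # container name reported in the log above
    image: busybox             # stand-in image for illustration
    command: ["sh", "-c",
      "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}               # "default" medium = node-local disk (no medium: Memory)
```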
Jan 23 11:36:46.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:36:46.800: INFO: namespace: e2e-tests-emptydir-hnkdd, resource: bindings, ignored listing per whitelist Jan 23 11:36:46.851: INFO: namespace e2e-tests-emptydir-hnkdd deletion completed in 6.590666346s • [SLOW TEST:18.127 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 11:36:46.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-a35e0744-3dd4-11ea-bb65-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 23 11:36:47.173: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a35f3a48-3dd4-11ea-bb65-0242ac110005" in namespace "e2e-tests-projected-mfft6" to be "success or failure" Jan 23 11:36:47.224: INFO: Pod "pod-projected-configmaps-a35f3a48-3dd4-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 51.41126ms Jan 23 11:36:49.242: INFO: Pod "pod-projected-configmaps-a35f3a48-3dd4-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069575058s Jan 23 11:36:51.279: INFO: Pod "pod-projected-configmaps-a35f3a48-3dd4-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106184084s Jan 23 11:36:53.296: INFO: Pod "pod-projected-configmaps-a35f3a48-3dd4-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122683886s Jan 23 11:36:56.133: INFO: Pod "pod-projected-configmaps-a35f3a48-3dd4-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.960599997s Jan 23 11:36:58.151: INFO: Pod "pod-projected-configmaps-a35f3a48-3dd4-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.97852959s STEP: Saw pod success Jan 23 11:36:58.152: INFO: Pod "pod-projected-configmaps-a35f3a48-3dd4-11ea-bb65-0242ac110005" satisfied condition "success or failure" Jan 23 11:36:58.159: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-a35f3a48-3dd4-11ea-bb65-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Jan 23 11:36:58.242: INFO: Waiting for pod pod-projected-configmaps-a35f3a48-3dd4-11ea-bb65-0242ac110005 to disappear Jan 23 11:36:58.255: INFO: Pod pod-projected-configmaps-a35f3a48-3dd4-11ea-bb65-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 11:36:58.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mfft6" for this suite. 
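The "consumable in multiple volumes in the same pod" case mounts one ConfigMap at two paths via projected volumes. A sketch under that assumption; names, mount paths, and the command are illustrative (the run's ConfigMap name carries a generated UID suffix, omitted here):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps     # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test   # container name from the log above
    image: busybox                          # stand-in image
    command: ["sh", "-c", "ls /etc/cm-a /etc/cm-b"]   # illustrative check of both mounts
    volumeMounts:
    - name: cm-volume-a
      mountPath: /etc/cm-a
    - name: cm-volume-b
      mountPath: /etc/cm-b
  volumes:
  - name: cm-volume-a
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # same ConfigMap referenced by both volumes
  - name: cm-volume-b
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume
```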
Jan 23 11:37:04.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:37:04.427: INFO: namespace: e2e-tests-projected-mfft6, resource: bindings, ignored listing per whitelist Jan 23 11:37:04.558: INFO: namespace e2e-tests-projected-mfft6 deletion completed in 6.291567287s • [SLOW TEST:17.706 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 11:37:04.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0123 11:37:17.791361 8 metrics_grabber.go:81] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 23 11:37:17.791: INFO:
For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 11:37:17.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-98qvz" for this suite.
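The ownership relations the garbage collector evaluates in this case live in each pod's `metadata.ownerReferences`. A hedged sketch of what a pod owned by both replication controllers would carry; the UIDs are fabricated placeholders and the field values are illustrative of the pattern, not copied from this run:

```yaml
metadata:
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted        # owner deleted by the test
    uid: 11111111-1111-1111-1111-111111111111   # illustrative UID
    blockOwnerDeletion: true                 # owner's deletion waits on this dependent
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay              # remaining valid owner
    uid: 22222222-2222-2222-2222-222222222222   # illustrative UID
```

Because `simpletest-rc-to-stay` remains a valid owner, the garbage collector must keep these pods alive even while the other owner waits for its dependents to be deleted, which is exactly what the test asserts.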
Jan 23 11:37:44.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:37:44.997: INFO: namespace: e2e-tests-gc-98qvz, resource: bindings, ignored listing per whitelist Jan 23 11:37:45.041: INFO: namespace e2e-tests-gc-98qvz deletion completed in 27.245241281s • [SLOW TEST:40.482 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 11:37:45.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Jan 23 11:37:55.499: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-c60e3a44-3dd4-11ea-bb65-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-75tkb", 
SelfLink:"/api/v1/namespaces/e2e-tests-pods-75tkb/pods/pod-submit-remove-c60e3a44-3dd4-11ea-bb65-0242ac110005", UID:"c61aca0f-3dd4-11ea-a994-fa163e34d433", ResourceVersion:"19182284", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715376265, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"340443806"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-hkgr7", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0012e1a40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), 
Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hkgr7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002020468), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001f72a20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0020204a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0020204c0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0020204c8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0020204cc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715376265, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715376274, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715376274, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715376265, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc001fb6de0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001fb6e80), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", 
ContainerID:"docker://353e52b65139e4ec9847b135cba03728a6314887d3d98f482303d6ff4b8e097b"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 11:38:12.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-75tkb" for this suite. Jan 23 11:38:18.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:38:19.019: INFO: namespace: e2e-tests-pods-75tkb, resource: bindings, ignored listing per whitelist Jan 23 11:38:19.038: INFO: namespace e2e-tests-pods-75tkb deletion completed in 6.272650331s • [SLOW TEST:33.994 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 11:38:19.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 23 11:38:19.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-p7vkm' Jan 23 11:38:21.492: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 23 11:38:21.492: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 Jan 23 11:38:23.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-p7vkm' Jan 23 11:38:23.930: INFO: stderr: "" Jan 23 11:38:23.930: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 11:38:23.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-p7vkm" for this suite. 
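The deprecation warning captured above comes from `kubectl run` defaulting to the `deployment/apps.v1` generator in this kubectl version. A sketch of the command the test ran next to the replacements the warning suggests; the pod name in the last line is an illustrative assumption:

```shell
# What the test ran (emits the deprecation warning seen in the log):
kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine \
  --namespace=e2e-tests-kubectl-p7vkm

# Replacement for creating a Deployment:
kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-p7vkm

# Replacement for creating a bare pod (illustrative name):
kubectl run e2e-test-nginx-pod --generator=run-pod/v1 \
  --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-p7vkm
```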
Jan 23 11:38:30.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:38:30.794: INFO: namespace: e2e-tests-kubectl-p7vkm, resource: bindings, ignored listing per whitelist Jan 23 11:38:30.868: INFO: namespace e2e-tests-kubectl-p7vkm deletion completed in 6.732249712s • [SLOW TEST:11.830 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 11:38:30.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-e15d6ec4-3dd4-11ea-bb65-0242ac110005 STEP: Creating a pod to test consume secrets Jan 23 11:38:31.195: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e15f72bd-3dd4-11ea-bb65-0242ac110005" in namespace "e2e-tests-projected-cv5xd" to be "success or failure" Jan 23 11:38:31.430: INFO: Pod 
"pod-projected-secrets-e15f72bd-3dd4-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 235.446264ms Jan 23 11:38:33.677: INFO: Pod "pod-projected-secrets-e15f72bd-3dd4-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.481576523s Jan 23 11:38:35.710: INFO: Pod "pod-projected-secrets-e15f72bd-3dd4-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.515078952s Jan 23 11:38:38.042: INFO: Pod "pod-projected-secrets-e15f72bd-3dd4-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.847442663s Jan 23 11:38:40.125: INFO: Pod "pod-projected-secrets-e15f72bd-3dd4-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.930009283s Jan 23 11:38:42.348: INFO: Pod "pod-projected-secrets-e15f72bd-3dd4-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.152953664s STEP: Saw pod success Jan 23 11:38:42.348: INFO: Pod "pod-projected-secrets-e15f72bd-3dd4-11ea-bb65-0242ac110005" satisfied condition "success or failure" Jan 23 11:38:42.363: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-e15f72bd-3dd4-11ea-bb65-0242ac110005 container projected-secret-volume-test: STEP: delete the pod Jan 23 11:38:42.651: INFO: Waiting for pod pod-projected-secrets-e15f72bd-3dd4-11ea-bb65-0242ac110005 to disappear Jan 23 11:38:42.666: INFO: Pod pod-projected-secrets-e15f72bd-3dd4-11ea-bb65-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 11:38:42.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-cv5xd" for this suite. 
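The projected-secret case above mounts a Secret into the pod through a `projected` volume. A minimal sketch under that assumption; the image, command, and mount path are illustrative, and the Secret's generated UID suffix is omitted:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets        # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test   # container name from the log above
    image: busybox                       # stand-in image
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test  # Secret created in the STEP above (UID suffix omitted)
```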
Jan 23 11:38:48.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:38:48.958: INFO: namespace: e2e-tests-projected-cv5xd, resource: bindings, ignored listing per whitelist Jan 23 11:38:48.964: INFO: namespace e2e-tests-projected-cv5xd deletion completed in 6.283004056s • [SLOW TEST:18.095 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 11:38:48.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-ec14d0d7-3dd4-11ea-bb65-0242ac110005 STEP: Creating a pod to test consume secrets Jan 23 11:38:49.162: INFO: Waiting up to 5m0s for pod "pod-secrets-ec158a74-3dd4-11ea-bb65-0242ac110005" in namespace "e2e-tests-secrets-tttm5" to be "success or failure" Jan 23 11:38:49.171: INFO: Pod "pod-secrets-ec158a74-3dd4-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.177167ms Jan 23 11:38:51.211: INFO: Pod "pod-secrets-ec158a74-3dd4-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049289368s Jan 23 11:38:53.240: INFO: Pod "pod-secrets-ec158a74-3dd4-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078164379s Jan 23 11:38:55.518: INFO: Pod "pod-secrets-ec158a74-3dd4-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.356346751s Jan 23 11:38:57.608: INFO: Pod "pod-secrets-ec158a74-3dd4-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.446488909s Jan 23 11:39:00.053: INFO: Pod "pod-secrets-ec158a74-3dd4-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.891345908s STEP: Saw pod success Jan 23 11:39:00.053: INFO: Pod "pod-secrets-ec158a74-3dd4-11ea-bb65-0242ac110005" satisfied condition "success or failure" Jan 23 11:39:00.064: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-ec158a74-3dd4-11ea-bb65-0242ac110005 container secret-volume-test: STEP: delete the pod Jan 23 11:39:00.432: INFO: Waiting for pod pod-secrets-ec158a74-3dd4-11ea-bb65-0242ac110005 to disappear Jan 23 11:39:00.655: INFO: Pod pod-secrets-ec158a74-3dd4-11ea-bb65-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 11:39:00.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-tttm5" for this suite. 
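The "non-root with defaultMode and fsGroup set" case combines a Secret volume's `defaultMode` with a pod-level `securityContext`. A hedged sketch; the numeric UID/GID, mode, image, and command are illustrative choices, not values recorded in this run:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-fsgroup          # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # non-root user, illustrative UID
    fsGroup: 1000                    # secret files get this group ownership
  containers:
  - name: secret-volume-test         # container name from the log above
    image: busybox                   # stand-in image
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test        # Secret created in the STEP above (UID suffix omitted)
      defaultMode: 0440              # e.g. readable by owner and fsGroup only
```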
Jan 23 11:39:09.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:39:09.180: INFO: namespace: e2e-tests-secrets-tttm5, resource: bindings, ignored listing per whitelist Jan 23 11:39:09.226: INFO: namespace e2e-tests-secrets-tttm5 deletion completed in 8.533224444s • [SLOW TEST:20.262 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 11:39:09.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Jan 23 11:39:09.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:09.920: INFO: stderr: "" Jan 23 11:39:09.920: INFO: stdout: 
"replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 23 11:39:09.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:10.054: INFO: stderr: "" Jan 23 11:39:10.054: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 Jan 23 11:39:15.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:15.220: INFO: stderr: "" Jan 23 11:39:15.221: INFO: stdout: "update-demo-nautilus-fw458 update-demo-nautilus-jz5zs " Jan 23 11:39:15.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fw458 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:15.347: INFO: stderr: "" Jan 23 11:39:15.347: INFO: stdout: "" Jan 23 11:39:15.347: INFO: update-demo-nautilus-fw458 is created but not running Jan 23 11:39:20.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:20.535: INFO: stderr: "" Jan 23 11:39:20.535: INFO: stdout: "update-demo-nautilus-fw458 update-demo-nautilus-jz5zs " Jan 23 11:39:20.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fw458 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:20.671: INFO: stderr: "" Jan 23 11:39:20.671: INFO: stdout: "true" Jan 23 11:39:20.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fw458 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:20.787: INFO: stderr: "" Jan 23 11:39:20.787: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 23 11:39:20.787: INFO: validating pod update-demo-nautilus-fw458 Jan 23 11:39:20.810: INFO: got data: { "image": "nautilus.jpg" } Jan 23 11:39:20.810: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 23 11:39:20.810: INFO: update-demo-nautilus-fw458 is verified up and running Jan 23 11:39:20.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jz5zs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:20.921: INFO: stderr: "" Jan 23 11:39:20.921: INFO: stdout: "" Jan 23 11:39:20.921: INFO: update-demo-nautilus-jz5zs is created but not running Jan 23 11:39:25.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:26.110: INFO: stderr: "" Jan 23 11:39:26.110: INFO: stdout: "update-demo-nautilus-fw458 update-demo-nautilus-jz5zs " Jan 23 11:39:26.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fw458 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:26.233: INFO: stderr: "" Jan 23 11:39:26.233: INFO: stdout: "true" Jan 23 11:39:26.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fw458 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:26.347: INFO: stderr: "" Jan 23 11:39:26.347: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 23 11:39:26.347: INFO: validating pod update-demo-nautilus-fw458 Jan 23 11:39:26.370: INFO: got data: { "image": "nautilus.jpg" } Jan 23 11:39:26.370: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 23 11:39:26.370: INFO: update-demo-nautilus-fw458 is verified up and running Jan 23 11:39:26.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jz5zs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:26.505: INFO: stderr: "" Jan 23 11:39:26.505: INFO: stdout: "true" Jan 23 11:39:26.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jz5zs -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:26.627: INFO: stderr: "" Jan 23 11:39:26.627: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 23 11:39:26.627: INFO: validating pod update-demo-nautilus-jz5zs Jan 23 11:39:26.638: INFO: got data: { "image": "nautilus.jpg" } Jan 23 11:39:26.638: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 23 11:39:26.638: INFO: update-demo-nautilus-jz5zs is verified up and running STEP: scaling down the replication controller Jan 23 11:39:26.644: INFO: scanned /root for discovery docs: Jan 23 11:39:26.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:28.162: INFO: stderr: "" Jan 23 11:39:28.162: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jan 23 11:39:28.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:28.390: INFO: stderr: "" Jan 23 11:39:28.390: INFO: stdout: "update-demo-nautilus-fw458 update-demo-nautilus-jz5zs " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 23 11:39:33.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:33.575: INFO: stderr: "" Jan 23 11:39:33.575: INFO: stdout: "update-demo-nautilus-fw458 update-demo-nautilus-jz5zs " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 23 11:39:38.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:38.813: INFO: stderr: "" Jan 23 11:39:38.813: INFO: stdout: "update-demo-nautilus-fw458 update-demo-nautilus-jz5zs " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 23 11:39:43.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:44.001: INFO: stderr: "" Jan 23 11:39:44.001: INFO: stdout: "update-demo-nautilus-fw458 " Jan 23 11:39:44.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fw458 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:44.122: INFO: stderr: "" Jan 23 11:39:44.122: INFO: stdout: "true" Jan 23 11:39:44.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fw458 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:44.216: INFO: stderr: "" Jan 23 11:39:44.217: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 23 11:39:44.217: INFO: validating pod update-demo-nautilus-fw458 Jan 23 11:39:44.226: INFO: got data: { "image": "nautilus.jpg" } Jan 23 11:39:44.226: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 23 11:39:44.226: INFO: update-demo-nautilus-fw458 is verified up and running STEP: scaling up the replication controller Jan 23 11:39:44.228: INFO: scanned /root for discovery docs: Jan 23 11:39:44.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:45.440: INFO: stderr: "" Jan 23 11:39:45.440: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 23 11:39:45.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:45.607: INFO: stderr: "" Jan 23 11:39:45.607: INFO: stdout: "update-demo-nautilus-fw458 update-demo-nautilus-zz8rt " Jan 23 11:39:45.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fw458 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:45.740: INFO: stderr: "" Jan 23 11:39:45.741: INFO: stdout: "true" Jan 23 11:39:45.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fw458 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:45.857: INFO: stderr: "" Jan 23 11:39:45.857: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 23 11:39:45.857: INFO: validating pod update-demo-nautilus-fw458 Jan 23 11:39:45.877: INFO: got data: { "image": "nautilus.jpg" } Jan 23 11:39:45.878: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 23 11:39:45.878: INFO: update-demo-nautilus-fw458 is verified up and running Jan 23 11:39:45.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zz8rt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:46.006: INFO: stderr: "" Jan 23 11:39:46.006: INFO: stdout: "" Jan 23 11:39:46.006: INFO: update-demo-nautilus-zz8rt is created but not running Jan 23 11:39:51.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:51.424: INFO: stderr: "" Jan 23 11:39:51.425: INFO: stdout: "update-demo-nautilus-fw458 update-demo-nautilus-zz8rt " Jan 23 11:39:51.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fw458 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:51.589: INFO: stderr: "" Jan 23 11:39:51.590: INFO: stdout: "true" Jan 23 11:39:51.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fw458 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:51.692: INFO: stderr: "" Jan 23 11:39:51.692: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 23 11:39:51.692: INFO: validating pod update-demo-nautilus-fw458 Jan 23 11:39:51.704: INFO: got data: { "image": "nautilus.jpg" } Jan 23 11:39:51.704: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 23 11:39:51.704: INFO: update-demo-nautilus-fw458 is verified up and running Jan 23 11:39:51.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zz8rt -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:51.796: INFO: stderr: "" Jan 23 11:39:51.796: INFO: stdout: "" Jan 23 11:39:51.796: INFO: update-demo-nautilus-zz8rt is created but not running Jan 23 11:39:56.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:57.034: INFO: stderr: "" Jan 23 11:39:57.034: INFO: stdout: "update-demo-nautilus-fw458 update-demo-nautilus-zz8rt " Jan 23 11:39:57.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fw458 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:57.172: INFO: stderr: "" Jan 23 11:39:57.172: INFO: stdout: "true" Jan 23 11:39:57.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fw458 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:57.321: INFO: stderr: "" Jan 23 11:39:57.321: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 23 11:39:57.321: INFO: validating pod update-demo-nautilus-fw458 Jan 23 11:39:57.328: INFO: got data: { "image": "nautilus.jpg" } Jan 23 11:39:57.328: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jan 23 11:39:57.328: INFO: update-demo-nautilus-fw458 is verified up and running Jan 23 11:39:57.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zz8rt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:57.449: INFO: stderr: "" Jan 23 11:39:57.449: INFO: stdout: "true" Jan 23 11:39:57.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zz8rt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:57.574: INFO: stderr: "" Jan 23 11:39:57.575: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 23 11:39:57.575: INFO: validating pod update-demo-nautilus-zz8rt Jan 23 11:39:57.629: INFO: got data: { "image": "nautilus.jpg" } Jan 23 11:39:57.629: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 23 11:39:57.629: INFO: update-demo-nautilus-zz8rt is verified up and running STEP: using delete to clean up resources Jan 23 11:39:57.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:57.837: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 23 11:39:57.837: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 23 11:39:57.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-s6sdl' Jan 23 11:39:58.066: INFO: stderr: "No resources found.\n" Jan 23 11:39:58.067: INFO: stdout: "" Jan 23 11:39:58.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-s6sdl -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 23 11:39:58.191: INFO: stderr: "" Jan 23 11:39:58.191: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 11:39:58.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-s6sdl" for this suite. 
Jan 23 11:40:22.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:40:22.610: INFO: namespace: e2e-tests-kubectl-s6sdl, resource: bindings, ignored listing per whitelist Jan 23 11:40:22.610: INFO: namespace e2e-tests-kubectl-s6sdl deletion completed in 24.400666988s • [SLOW TEST:73.383 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 11:40:22.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jan 23 11:40:22.831: INFO: Number of nodes with available pods: 0 Jan 23 11:40:22.831: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:40:23.854: INFO: Number of nodes with available pods: 0 Jan 23 11:40:23.854: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:40:24.899: INFO: Number of nodes with available pods: 0 Jan 23 11:40:24.900: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:40:25.853: INFO: Number of nodes with available pods: 0 Jan 23 11:40:25.853: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:40:26.883: INFO: Number of nodes with available pods: 0 Jan 23 11:40:26.883: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:40:29.145: INFO: Number of nodes with available pods: 0 Jan 23 11:40:29.145: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:40:30.201: INFO: Number of nodes with available pods: 0 Jan 23 11:40:30.201: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:40:30.854: INFO: Number of nodes with available pods: 0 Jan 23 11:40:30.854: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:40:31.938: INFO: Number of nodes with available pods: 0 Jan 23 11:40:31.938: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:40:32.859: INFO: Number of nodes with available pods: 1 Jan 23 11:40:32.859: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jan 23 11:40:32.930: INFO: Number of nodes with available pods: 1 Jan 23 11:40:32.930: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-tjwr2, will wait for the garbage collector to delete the pods Jan 23 11:40:34.289: INFO: Deleting DaemonSet.extensions daemon-set took: 14.793488ms Jan 23 11:40:34.790: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.973499ms Jan 23 11:40:40.452: INFO: Number of nodes with available pods: 0 Jan 23 11:40:40.452: INFO: Number of running nodes: 0, number of available pods: 0 Jan 23 11:40:40.492: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-tjwr2/daemonsets","resourceVersion":"19182691"},"items":null} Jan 23 11:40:40.614: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-tjwr2/pods","resourceVersion":"19182691"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 11:40:40.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-tjwr2" for this suite. 
Jan 23 11:40:48.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:40:48.785: INFO: namespace: e2e-tests-daemonsets-tjwr2, resource: bindings, ignored listing per whitelist Jan 23 11:40:48.855: INFO: namespace e2e-tests-daemonsets-tjwr2 deletion completed in 8.188034904s • [SLOW TEST:26.245 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 11:40:48.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 23 11:40:49.119: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3396af31-3dd5-11ea-bb65-0242ac110005" in namespace "e2e-tests-projected-qv2bb" to be "success or failure" Jan 23 11:40:49.138: INFO: Pod "downwardapi-volume-3396af31-3dd5-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.392593ms Jan 23 11:40:51.192: INFO: Pod "downwardapi-volume-3396af31-3dd5-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072708827s Jan 23 11:40:53.214: INFO: Pod "downwardapi-volume-3396af31-3dd5-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094580445s Jan 23 11:40:55.585: INFO: Pod "downwardapi-volume-3396af31-3dd5-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.465458046s Jan 23 11:40:57.601: INFO: Pod "downwardapi-volume-3396af31-3dd5-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.481424839s Jan 23 11:40:59.615: INFO: Pod "downwardapi-volume-3396af31-3dd5-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.495531102s STEP: Saw pod success Jan 23 11:40:59.615: INFO: Pod "downwardapi-volume-3396af31-3dd5-11ea-bb65-0242ac110005" satisfied condition "success or failure" Jan 23 11:40:59.624: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3396af31-3dd5-11ea-bb65-0242ac110005 container client-container: STEP: delete the pod Jan 23 11:41:00.661: INFO: Waiting for pod downwardapi-volume-3396af31-3dd5-11ea-bb65-0242ac110005 to disappear Jan 23 11:41:00.908: INFO: Pod downwardapi-volume-3396af31-3dd5-11ea-bb65-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 11:41:00.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qv2bb" for this suite. 
Jan 23 11:41:07.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:41:07.111: INFO: namespace: e2e-tests-projected-qv2bb, resource: bindings, ignored listing per whitelist Jan 23 11:41:07.173: INFO: namespace e2e-tests-projected-qv2bb deletion completed in 6.25065883s • [SLOW TEST:18.317 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 11:41:07.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller Jan 23 11:41:07.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-jmrlt' Jan 23 11:41:07.875: INFO: stderr: "" Jan 23 11:41:07.876: INFO: 
stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 23 11:41:07.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jmrlt' Jan 23 11:41:08.250: INFO: stderr: "" Jan 23 11:41:08.250: INFO: stdout: "update-demo-nautilus-klgfg " STEP: Replicas for name=update-demo: expected=2 actual=1 Jan 23 11:41:13.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jmrlt' Jan 23 11:41:13.428: INFO: stderr: "" Jan 23 11:41:13.428: INFO: stdout: "update-demo-nautilus-k25q9 update-demo-nautilus-klgfg " Jan 23 11:41:13.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k25q9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jmrlt' Jan 23 11:41:13.607: INFO: stderr: "" Jan 23 11:41:13.608: INFO: stdout: "" Jan 23 11:41:13.608: INFO: update-demo-nautilus-k25q9 is created but not running Jan 23 11:41:18.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jmrlt' Jan 23 11:41:18.770: INFO: stderr: "" Jan 23 11:41:18.770: INFO: stdout: "update-demo-nautilus-k25q9 update-demo-nautilus-klgfg " Jan 23 11:41:18.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k25q9 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jmrlt' Jan 23 11:41:18.998: INFO: stderr: "" Jan 23 11:41:18.998: INFO: stdout: "" Jan 23 11:41:18.998: INFO: update-demo-nautilus-k25q9 is created but not running Jan 23 11:41:23.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jmrlt' Jan 23 11:41:24.100: INFO: stderr: "" Jan 23 11:41:24.100: INFO: stdout: "update-demo-nautilus-k25q9 update-demo-nautilus-klgfg " Jan 23 11:41:24.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k25q9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jmrlt' Jan 23 11:41:24.215: INFO: stderr: "" Jan 23 11:41:24.215: INFO: stdout: "true" Jan 23 11:41:24.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k25q9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jmrlt' Jan 23 11:41:24.312: INFO: stderr: "" Jan 23 11:41:24.312: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 23 11:41:24.312: INFO: validating pod update-demo-nautilus-k25q9 Jan 23 11:41:24.326: INFO: got data: { "image": "nautilus.jpg" } Jan 23 11:41:24.326: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jan 23 11:41:24.326: INFO: update-demo-nautilus-k25q9 is verified up and running Jan 23 11:41:24.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-klgfg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jmrlt' Jan 23 11:41:24.436: INFO: stderr: "" Jan 23 11:41:24.436: INFO: stdout: "true" Jan 23 11:41:24.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-klgfg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jmrlt' Jan 23 11:41:24.563: INFO: stderr: "" Jan 23 11:41:24.563: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 23 11:41:24.563: INFO: validating pod update-demo-nautilus-klgfg Jan 23 11:41:24.577: INFO: got data: { "image": "nautilus.jpg" } Jan 23 11:41:24.578: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jan 23 11:41:24.578: INFO: update-demo-nautilus-klgfg is verified up and running
STEP: rolling-update to new replication controller
Jan 23 11:41:24.581: INFO: scanned /root for discovery docs:
Jan 23 11:41:24.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-jmrlt'
Jan 23 11:41:58.356: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 23 11:41:58.356: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 23 11:41:58.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jmrlt'
Jan 23 11:41:58.581: INFO: stderr: ""
Jan 23 11:41:58.582: INFO: stdout: "update-demo-kitten-2jnqx update-demo-kitten-zlfmr "
Jan 23 11:41:58.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2jnqx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jmrlt'
Jan 23 11:41:58.700: INFO: stderr: ""
Jan 23 11:41:58.700: INFO: stdout: "true"
Jan 23 11:41:58.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2jnqx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jmrlt'
Jan 23 11:41:58.805: INFO: stderr: ""
Jan 23 11:41:58.805: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 23 11:41:58.805: INFO: validating pod update-demo-kitten-2jnqx
Jan 23 11:41:58.826: INFO: got data: { "image": "kitten.jpg" }
Jan 23 11:41:58.826: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 23 11:41:58.826: INFO: update-demo-kitten-2jnqx is verified up and running
Jan 23 11:41:58.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-zlfmr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jmrlt'
Jan 23 11:41:58.954: INFO: stderr: ""
Jan 23 11:41:58.954: INFO: stdout: "true"
Jan 23 11:41:58.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-zlfmr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jmrlt'
Jan 23 11:41:59.181: INFO: stderr: ""
Jan 23 11:41:59.181: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 23 11:41:59.181: INFO: validating pod update-demo-kitten-zlfmr
Jan 23 11:41:59.199: INFO: got data: { "image": "kitten.jpg" }
Jan 23 11:41:59.199: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 23 11:41:59.199: INFO: update-demo-kitten-zlfmr is verified up and running
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:41:59.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jmrlt" for this suite.
Jan 23 11:42:39.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:42:39.396: INFO: namespace: e2e-tests-kubectl-jmrlt, resource: bindings, ignored listing per whitelist
Jan 23 11:42:39.399: INFO: namespace e2e-tests-kubectl-jmrlt deletion completed in 40.190265822s

• [SLOW TEST:92.225 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:42:39.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jan 23 11:42:39.626: INFO: Waiting up to 5m0s for pod "client-containers-75750297-3dd5-11ea-bb65-0242ac110005" in namespace "e2e-tests-containers-qmkjh" to be "success or failure"
Jan 23 11:42:39.652: INFO: Pod "client-containers-75750297-3dd5-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.018116ms
Jan 23 11:42:41.753: INFO: Pod "client-containers-75750297-3dd5-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126930692s
Jan 23 11:42:43.770: INFO: Pod "client-containers-75750297-3dd5-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143962796s
Jan 23 11:42:46.172: INFO: Pod "client-containers-75750297-3dd5-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.545053656s
Jan 23 11:42:48.222: INFO: Pod "client-containers-75750297-3dd5-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.595745251s
Jan 23 11:42:50.253: INFO: Pod "client-containers-75750297-3dd5-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.626151923s
STEP: Saw pod success
Jan 23 11:42:50.253: INFO: Pod "client-containers-75750297-3dd5-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 11:42:50.261: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-75750297-3dd5-11ea-bb65-0242ac110005 container test-container:
STEP: delete the pod
Jan 23 11:42:50.420: INFO: Waiting for pod client-containers-75750297-3dd5-11ea-bb65-0242ac110005 to disappear
Jan 23 11:42:50.468: INFO: Pod client-containers-75750297-3dd5-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:42:50.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-qmkjh" for this suite.
Jan 23 11:42:56.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:42:56.638: INFO: namespace: e2e-tests-containers-qmkjh, resource: bindings, ignored listing per whitelist
Jan 23 11:42:56.722: INFO: namespace e2e-tests-containers-qmkjh deletion completed in 6.240680526s

• [SLOW TEST:17.323 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:42:56.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 23 11:42:56.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 23 11:42:57.084: INFO: stderr: ""
Jan 23 11:42:57.084: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:42:57.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-d2lmr" for this suite.
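The `kubectl version` stdout captured above prints two Go `version.Info` struct literals. A small Python sketch for pulling the `GitVersion` fields out of such output (the function name and the regex approach are illustrative, not part of the e2e framework):

```python
import re

# Extract every GitVersion:"..." field from `kubectl version` output.
def git_versions(stdout: str) -> list:
    return re.findall(r'GitVersion:"([^"]+)"', stdout)

# Abbreviated sample modeled on the stdout in the log above.
line = ('Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.12"}\n'
        'Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.8"}\n')
print(git_versions(line))  # → ['v1.13.12', 'v1.13.8']
```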
Jan 23 11:43:05.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:43:05.294: INFO: namespace: e2e-tests-kubectl-d2lmr, resource: bindings, ignored listing per whitelist
Jan 23 11:43:05.310: INFO: namespace e2e-tests-kubectl-d2lmr deletion completed in 8.215890095s

• [SLOW TEST:8.588 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl version
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:43:05.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:44:05.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-tsdwj" for this suite.
Jan 23 11:44:29.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:44:29.705: INFO: namespace: e2e-tests-container-probe-tsdwj, resource: bindings, ignored listing per whitelist
Jan 23 11:44:29.802: INFO: namespace e2e-tests-container-probe-tsdwj deletion completed in 24.239664485s

• [SLOW TEST:84.491 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:44:29.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 23 11:44:30.074: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b744b4ee-3dd5-11ea-bb65-0242ac110005" in namespace "e2e-tests-projected-trfm6" to be "success or failure"
Jan 23 11:44:30.134: INFO: Pod "downwardapi-volume-b744b4ee-3dd5-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 60.552942ms
Jan 23 11:44:32.345: INFO: Pod "downwardapi-volume-b744b4ee-3dd5-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.271384752s
Jan 23 11:44:34.360: INFO: Pod "downwardapi-volume-b744b4ee-3dd5-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.285714317s
Jan 23 11:44:36.564: INFO: Pod "downwardapi-volume-b744b4ee-3dd5-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.489810375s
Jan 23 11:44:38.604: INFO: Pod "downwardapi-volume-b744b4ee-3dd5-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.530077401s
Jan 23 11:44:40.623: INFO: Pod "downwardapi-volume-b744b4ee-3dd5-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.548810878s
STEP: Saw pod success
Jan 23 11:44:40.623: INFO: Pod "downwardapi-volume-b744b4ee-3dd5-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 11:44:40.628: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b744b4ee-3dd5-11ea-bb65-0242ac110005 container client-container:
STEP: delete the pod
Jan 23 11:44:40.675: INFO: Waiting for pod downwardapi-volume-b744b4ee-3dd5-11ea-bb65-0242ac110005 to disappear
Jan 23 11:44:40.690: INFO: Pod downwardapi-volume-b744b4ee-3dd5-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:44:40.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-trfm6" for this suite.
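A recurring pattern in this log is the "success or failure" wait: the framework polls a pod's phase roughly every two seconds, up to a 5m0s timeout, until the pod reaches a terminal phase (Succeeded or Failed). A minimal Python sketch of that loop, under the assumption that `get_phase` stands in for an API call (all names here are hypothetical, not the framework's own):

```python
import time

# Poll a pod phase until it is terminal ("Succeeded" or "Failed")
# or the timeout elapses; returns the terminal phase and elapsed time.
def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        sleep(interval)

# Deterministic usage: phases advance on successive polls; sleep is a no-op.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
phase, _ = wait_for_terminal_phase(lambda: next(phases), sleep=lambda s: None)
print(phase)  # → Succeeded
```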
Jan 23 11:44:48.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:44:48.856: INFO: namespace: e2e-tests-projected-trfm6, resource: bindings, ignored listing per whitelist
Jan 23 11:44:48.913: INFO: namespace e2e-tests-projected-trfm6 deletion completed in 8.216052677s

• [SLOW TEST:19.111 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:44:48.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 23 11:44:49.147: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Jan 23 11:44:49.155: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-bvbll/daemonsets","resourceVersion":"19183238"},"items":null}
Jan 23 11:44:49.158: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-bvbll/pods","resourceVersion":"19183238"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:44:49.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-bvbll" for this suite.
Jan 23 11:44:55.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:44:55.266: INFO: namespace: e2e-tests-daemonsets-bvbll, resource: bindings, ignored listing per whitelist
Jan 23 11:44:55.356: INFO: namespace e2e-tests-daemonsets-bvbll deletion completed in 6.186169092s

S [SKIPPING] [6.443 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should rollback without unnecessary restarts [Conformance] [It]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 23 11:44:49.147: Requires at least 2 nodes (not -1)
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:44:55.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-c6782069-3dd5-11ea-bb65-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 23 11:44:55.565: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c67b0bbc-3dd5-11ea-bb65-0242ac110005" in namespace "e2e-tests-projected-gfvxb" to be "success or failure"
Jan 23 11:44:55.573: INFO: Pod "pod-projected-configmaps-c67b0bbc-3dd5-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.610248ms
Jan 23 11:44:57.736: INFO: Pod "pod-projected-configmaps-c67b0bbc-3dd5-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.17154185s
Jan 23 11:44:59.752: INFO: Pod "pod-projected-configmaps-c67b0bbc-3dd5-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187221184s
Jan 23 11:45:01.979: INFO: Pod "pod-projected-configmaps-c67b0bbc-3dd5-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.413802882s
Jan 23 11:45:04.030: INFO: Pod "pod-projected-configmaps-c67b0bbc-3dd5-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.465206154s
Jan 23 11:45:06.047: INFO: Pod "pod-projected-configmaps-c67b0bbc-3dd5-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.482363069s
STEP: Saw pod success
Jan 23 11:45:06.047: INFO: Pod "pod-projected-configmaps-c67b0bbc-3dd5-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 11:45:06.053: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-c67b0bbc-3dd5-11ea-bb65-0242ac110005 container projected-configmap-volume-test:
STEP: delete the pod
Jan 23 11:45:06.308: INFO: Waiting for pod pod-projected-configmaps-c67b0bbc-3dd5-11ea-bb65-0242ac110005 to disappear
Jan 23 11:45:06.342: INFO: Pod pod-projected-configmaps-c67b0bbc-3dd5-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:45:06.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gfvxb" for this suite.
Jan 23 11:45:12.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:45:12.732: INFO: namespace: e2e-tests-projected-gfvxb, resource: bindings, ignored listing per whitelist
Jan 23 11:45:12.755: INFO: namespace e2e-tests-projected-gfvxb deletion completed in 6.400537205s

• [SLOW TEST:17.399 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:45:12.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 23 11:45:31.352: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 23 11:45:31.370: INFO: Pod pod-with-poststart-http-hook still exists
Jan 23 11:45:33.371: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 23 11:45:33.431: INFO: Pod pod-with-poststart-http-hook still exists
Jan 23 11:45:35.371: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 23 11:45:35.392: INFO: Pod pod-with-poststart-http-hook still exists
Jan 23 11:45:37.371: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 23 11:45:37.387: INFO: Pod pod-with-poststart-http-hook still exists
Jan 23 11:45:39.371: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 23 11:45:39.400: INFO: Pod pod-with-poststart-http-hook still exists
Jan 23 11:45:41.371: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 23 11:45:41.396: INFO: Pod pod-with-poststart-http-hook still exists
Jan 23 11:45:43.371: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 23 11:45:43.393: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
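The "Waiting for pod ... to disappear / still exists" entries above show the deletion wait: the framework re-checks every two seconds until the pod is gone. A minimal Python sketch of that loop, where `pod_exists` stands in for a GET that eventually reports the pod absent (all names here are illustrative):

```python
import time

# Poll every `interval` seconds until the pod no longer exists,
# or raise once the timeout budget is exhausted.
def wait_for_disappear(pod_exists, timeout=60.0, interval=2.0, sleep=time.sleep):
    waited = 0.0
    while pod_exists():
        if waited >= timeout:
            raise TimeoutError("pod still exists")
        sleep(interval)
        waited += interval
    return waited

# Deterministic usage: the pod "exists" for two checks, then is gone.
checks = iter([True, True, False])
waited = wait_for_disappear(lambda: next(checks), sleep=lambda s: None)
print(waited)  # → 4.0
```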
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:45:43.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-vxhlz" for this suite.
Jan 23 11:46:07.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:46:07.617: INFO: namespace: e2e-tests-container-lifecycle-hook-vxhlz, resource: bindings, ignored listing per whitelist
Jan 23 11:46:07.685: INFO: namespace e2e-tests-container-lifecycle-hook-vxhlz deletion completed in 24.279668786s

• [SLOW TEST:54.929 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:46:07.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 23 11:46:07.968: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f1a2b037-3dd5-11ea-bb65-0242ac110005" in namespace "e2e-tests-projected-r6g98" to be "success or failure"
Jan 23 11:46:07.994: INFO: Pod "downwardapi-volume-f1a2b037-3dd5-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.593145ms
Jan 23 11:46:10.396: INFO: Pod "downwardapi-volume-f1a2b037-3dd5-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.428266525s
Jan 23 11:46:12.409: INFO: Pod "downwardapi-volume-f1a2b037-3dd5-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.440805358s
Jan 23 11:46:14.429: INFO: Pod "downwardapi-volume-f1a2b037-3dd5-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.461001138s
Jan 23 11:46:16.457: INFO: Pod "downwardapi-volume-f1a2b037-3dd5-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.488599723s
Jan 23 11:46:18.473: INFO: Pod "downwardapi-volume-f1a2b037-3dd5-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.504391003s
STEP: Saw pod success
Jan 23 11:46:18.473: INFO: Pod "downwardapi-volume-f1a2b037-3dd5-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 11:46:18.500: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f1a2b037-3dd5-11ea-bb65-0242ac110005 container client-container:
STEP: delete the pod
Jan 23 11:46:19.698: INFO: Waiting for pod downwardapi-volume-f1a2b037-3dd5-11ea-bb65-0242ac110005 to disappear
Jan 23 11:46:19.944: INFO: Pod downwardapi-volume-f1a2b037-3dd5-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:46:19.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-r6g98" for this suite.
Jan 23 11:46:26.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:46:26.274: INFO: namespace: e2e-tests-projected-r6g98, resource: bindings, ignored listing per whitelist
Jan 23 11:46:26.415: INFO: namespace e2e-tests-projected-r6g98 deletion completed in 6.44533493s

• [SLOW TEST:18.730 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:46:26.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-bgml
STEP: Creating a pod to test atomic-volume-subpath
Jan 23 11:46:26.767: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-bgml" in namespace "e2e-tests-subpath-vx8bw" to be "success or failure"
Jan 23 11:46:26.793: INFO: Pod "pod-subpath-test-downwardapi-bgml": Phase="Pending", Reason="", readiness=false. Elapsed: 25.302202ms
Jan 23 11:46:28.811: INFO: Pod "pod-subpath-test-downwardapi-bgml": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043075825s
Jan 23 11:46:30.825: INFO: Pod "pod-subpath-test-downwardapi-bgml": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057145235s
Jan 23 11:46:32.842: INFO: Pod "pod-subpath-test-downwardapi-bgml": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074698521s
Jan 23 11:46:34.866: INFO: Pod "pod-subpath-test-downwardapi-bgml": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098638064s
Jan 23 11:46:36.925: INFO: Pod "pod-subpath-test-downwardapi-bgml": Phase="Pending", Reason="", readiness=false. Elapsed: 10.157345232s
Jan 23 11:46:38.995: INFO: Pod "pod-subpath-test-downwardapi-bgml": Phase="Pending", Reason="", readiness=false. Elapsed: 12.227528271s
Jan 23 11:46:41.108: INFO: Pod "pod-subpath-test-downwardapi-bgml": Phase="Pending", Reason="", readiness=false. Elapsed: 14.340065713s
Jan 23 11:46:43.120: INFO: Pod "pod-subpath-test-downwardapi-bgml": Phase="Running", Reason="", readiness=true. Elapsed: 16.352543945s
Jan 23 11:46:45.137: INFO: Pod "pod-subpath-test-downwardapi-bgml": Phase="Running", Reason="", readiness=false. Elapsed: 18.369108513s
Jan 23 11:46:47.148: INFO: Pod "pod-subpath-test-downwardapi-bgml": Phase="Running", Reason="", readiness=false. Elapsed: 20.380943266s
Jan 23 11:46:49.161: INFO: Pod "pod-subpath-test-downwardapi-bgml": Phase="Running", Reason="", readiness=false. Elapsed: 22.393295254s
Jan 23 11:46:51.183: INFO: Pod "pod-subpath-test-downwardapi-bgml": Phase="Running", Reason="", readiness=false. Elapsed: 24.415149969s
Jan 23 11:46:53.245: INFO: Pod "pod-subpath-test-downwardapi-bgml": Phase="Running", Reason="", readiness=false. Elapsed: 26.477482134s
Jan 23 11:46:55.258: INFO: Pod "pod-subpath-test-downwardapi-bgml": Phase="Running", Reason="", readiness=false. Elapsed: 28.490262536s
Jan 23 11:46:57.300: INFO: Pod "pod-subpath-test-downwardapi-bgml": Phase="Running", Reason="", readiness=false. Elapsed: 30.532122179s
Jan 23 11:46:59.318: INFO: Pod "pod-subpath-test-downwardapi-bgml": Phase="Running", Reason="", readiness=false. Elapsed: 32.550808994s
Jan 23 11:47:01.335: INFO: Pod "pod-subpath-test-downwardapi-bgml": Phase="Running", Reason="", readiness=false. Elapsed: 34.56738718s
Jan 23 11:47:03.356: INFO: Pod "pod-subpath-test-downwardapi-bgml": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.588165475s
STEP: Saw pod success
Jan 23 11:47:03.356: INFO: Pod "pod-subpath-test-downwardapi-bgml" satisfied condition "success or failure"
Jan 23 11:47:03.361: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-bgml container test-container-subpath-downwardapi-bgml:
STEP: delete the pod
Jan 23 11:47:03.666: INFO: Waiting for pod pod-subpath-test-downwardapi-bgml to disappear
Jan 23 11:47:03.675: INFO: Pod pod-subpath-test-downwardapi-bgml no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-bgml
Jan 23 11:47:03.675: INFO: Deleting pod "pod-subpath-test-downwardapi-bgml" in namespace "e2e-tests-subpath-vx8bw"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:47:03.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-vx8bw" for this suite.
Jan 23 11:47:09.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:47:09.844: INFO: namespace: e2e-tests-subpath-vx8bw, resource: bindings, ignored listing per whitelist
Jan 23 11:47:09.916: INFO: namespace e2e-tests-subpath-vx8bw deletion completed in 6.228331621s

• [SLOW TEST:43.500 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:47:09.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:47:20.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-7sz92" for this suite.
Jan 23 11:47:26.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:47:26.772: INFO: namespace: e2e-tests-emptydir-wrapper-7sz92, resource: bindings, ignored listing per whitelist
Jan 23 11:47:26.802: INFO: namespace e2e-tests-emptydir-wrapper-7sz92 deletion completed in 6.268028785s

• [SLOW TEST:16.886 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23
11:47:26.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 23 11:47:27.019: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Jan 23 11:47:27.086: INFO: Number of nodes with available pods: 0 Jan 23 11:47:27.086: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:47:28.109: INFO: Number of nodes with available pods: 0 Jan 23 11:47:28.109: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:47:29.346: INFO: Number of nodes with available pods: 0 Jan 23 11:47:29.346: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:47:30.107: INFO: Number of nodes with available pods: 0 Jan 23 11:47:30.107: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:47:31.162: INFO: Number of nodes with available pods: 0 Jan 23 11:47:31.162: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:47:32.542: INFO: Number of nodes with available pods: 0 Jan 23 11:47:32.543: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:47:33.163: INFO: Number of nodes with available pods: 0 Jan 23 11:47:33.163: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:47:34.131: INFO: Number of nodes with available pods: 0 Jan 23 11:47:34.131: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:47:35.121: INFO: Number of nodes with available 
pods: 0 Jan 23 11:47:35.121: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:47:36.126: INFO: Number of nodes with available pods: 1 Jan 23 11:47:36.126: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jan 23 11:47:36.204: INFO: Wrong image for pod: daemon-set-98xnd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 23 11:47:37.330: INFO: Wrong image for pod: daemon-set-98xnd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 23 11:47:38.314: INFO: Wrong image for pod: daemon-set-98xnd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 23 11:47:39.325: INFO: Wrong image for pod: daemon-set-98xnd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 23 11:47:40.313: INFO: Wrong image for pod: daemon-set-98xnd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 23 11:47:41.319: INFO: Wrong image for pod: daemon-set-98xnd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 23 11:47:42.312: INFO: Wrong image for pod: daemon-set-98xnd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 23 11:47:43.339: INFO: Wrong image for pod: daemon-set-98xnd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 23 11:47:43.339: INFO: Pod daemon-set-98xnd is not available Jan 23 11:47:44.312: INFO: Wrong image for pod: daemon-set-98xnd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 23 11:47:44.313: INFO: Pod daemon-set-98xnd is not available Jan 23 11:47:45.315: INFO: Wrong image for pod: daemon-set-98xnd. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 23 11:47:45.315: INFO: Pod daemon-set-98xnd is not available Jan 23 11:47:46.318: INFO: Wrong image for pod: daemon-set-98xnd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 23 11:47:46.318: INFO: Pod daemon-set-98xnd is not available Jan 23 11:47:47.319: INFO: Wrong image for pod: daemon-set-98xnd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 23 11:47:47.319: INFO: Pod daemon-set-98xnd is not available Jan 23 11:47:48.325: INFO: Wrong image for pod: daemon-set-98xnd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 23 11:47:48.325: INFO: Pod daemon-set-98xnd is not available Jan 23 11:47:49.319: INFO: Wrong image for pod: daemon-set-98xnd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 23 11:47:49.319: INFO: Pod daemon-set-98xnd is not available Jan 23 11:47:50.318: INFO: Wrong image for pod: daemon-set-98xnd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 23 11:47:50.318: INFO: Pod daemon-set-98xnd is not available Jan 23 11:47:51.317: INFO: Wrong image for pod: daemon-set-98xnd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 23 11:47:51.317: INFO: Pod daemon-set-98xnd is not available Jan 23 11:47:52.314: INFO: Wrong image for pod: daemon-set-98xnd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 23 11:47:52.314: INFO: Pod daemon-set-98xnd is not available Jan 23 11:47:53.589: INFO: Pod daemon-set-qqtn6 is not available STEP: Check that daemon pods are still running on every node of the cluster. 
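The "Wrong image" entries above are the expected transient state of a RollingUpdate: the test has changed the DaemonSet's pod template image from nginx:1.14-alpine to redis:1.0 and is polling until the replacement pod carries the new image. Outside the e2e framework, the same update can be triggered and observed with kubectl; a hedged sketch using the namespace and DaemonSet name from this log (the container name `app` is an assumption, not taken from the log):

```shell
# Assumes a DaemonSet "daemon-set" with updateStrategy: RollingUpdate,
# matching the objects this log shows. The container name "app" is
# hypothetical -- substitute the real container name from the pod template.
kubectl -n e2e-tests-daemonsets-tw9cj set image daemonset/daemon-set \
  app=gcr.io/kubernetes-e2e-test-images/redis:1.0

# Block until the old nginx:1.14-alpine pod has been replaced, the same
# transition the polling entries above record.
kubectl -n e2e-tests-daemonsets-tw9cj rollout status daemonset/daemon-set
```

On a single-node cluster like this one (only hunter-server-hu5at5svl7ps appears in the log), the rollout necessarily passes through a window with zero available pods, which is why the availability check that follows starts from "Number of nodes with available pods: 0".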
Jan 23 11:47:54.363: INFO: Number of nodes with available pods: 0 Jan 23 11:47:54.363: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:47:55.642: INFO: Number of nodes with available pods: 0 Jan 23 11:47:55.642: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:47:56.430: INFO: Number of nodes with available pods: 0 Jan 23 11:47:56.430: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:47:57.399: INFO: Number of nodes with available pods: 0 Jan 23 11:47:57.399: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:47:58.413: INFO: Number of nodes with available pods: 0 Jan 23 11:47:58.413: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:47:59.750: INFO: Number of nodes with available pods: 0 Jan 23 11:47:59.751: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:48:00.807: INFO: Number of nodes with available pods: 0 Jan 23 11:48:00.807: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:48:01.384: INFO: Number of nodes with available pods: 0 Jan 23 11:48:01.384: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:48:02.375: INFO: Number of nodes with available pods: 0 Jan 23 11:48:02.375: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 23 11:48:03.397: INFO: Number of nodes with available pods: 1 Jan 23 11:48:03.397: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-tw9cj, will wait for the garbage collector to delete the pods Jan 23 11:48:03.522: INFO: Deleting DaemonSet.extensions daemon-set took: 
35.029922ms Jan 23 11:48:03.722: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.625301ms Jan 23 11:48:12.762: INFO: Number of nodes with available pods: 0 Jan 23 11:48:12.762: INFO: Number of running nodes: 0, number of available pods: 0 Jan 23 11:48:12.773: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-tw9cj/daemonsets","resourceVersion":"19183689"},"items":null} Jan 23 11:48:12.779: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-tw9cj/pods","resourceVersion":"19183689"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 11:48:12.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-tw9cj" for this suite. Jan 23 11:48:20.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:48:21.011: INFO: namespace: e2e-tests-daemonsets-tw9cj, resource: bindings, ignored listing per whitelist Jan 23 11:48:21.069: INFO: namespace e2e-tests-daemonsets-tw9cj deletion completed in 8.1950259s • [SLOW TEST:54.266 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 11:48:21.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 23 11:48:21.539: INFO: Waiting up to 5m0s for pod "downwardapi-volume-413e2681-3dd6-11ea-bb65-0242ac110005" in namespace "e2e-tests-downward-api-v66vm" to be "success or failure" Jan 23 11:48:21.641: INFO: Pod "downwardapi-volume-413e2681-3dd6-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 102.053757ms Jan 23 11:48:23.912: INFO: Pod "downwardapi-volume-413e2681-3dd6-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.372724511s Jan 23 11:48:25.934: INFO: Pod "downwardapi-volume-413e2681-3dd6-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.394773884s Jan 23 11:48:28.204: INFO: Pod "downwardapi-volume-413e2681-3dd6-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.664658058s Jan 23 11:48:30.216: INFO: Pod "downwardapi-volume-413e2681-3dd6-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.676993207s Jan 23 11:48:32.484: INFO: Pod "downwardapi-volume-413e2681-3dd6-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.944784174s STEP: Saw pod success Jan 23 11:48:32.484: INFO: Pod "downwardapi-volume-413e2681-3dd6-11ea-bb65-0242ac110005" satisfied condition "success or failure" Jan 23 11:48:32.495: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-413e2681-3dd6-11ea-bb65-0242ac110005 container client-container: STEP: delete the pod Jan 23 11:48:32.671: INFO: Waiting for pod downwardapi-volume-413e2681-3dd6-11ea-bb65-0242ac110005 to disappear Jan 23 11:48:32.745: INFO: Pod downwardapi-volume-413e2681-3dd6-11ea-bb65-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 11:48:32.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-v66vm" for this suite. Jan 23 11:48:38.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:48:38.845: INFO: namespace: e2e-tests-downward-api-v66vm, resource: bindings, ignored listing per whitelist Jan 23 11:48:38.905: INFO: namespace e2e-tests-downward-api-v66vm deletion completed in 6.1491243s • [SLOW TEST:17.836 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 
11:48:38.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-4baee4a7-3dd6-11ea-bb65-0242ac110005 Jan 23 11:48:39.149: INFO: Pod name my-hostname-basic-4baee4a7-3dd6-11ea-bb65-0242ac110005: Found 0 pods out of 1 Jan 23 11:48:44.234: INFO: Pod name my-hostname-basic-4baee4a7-3dd6-11ea-bb65-0242ac110005: Found 1 pods out of 1 Jan 23 11:48:44.234: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-4baee4a7-3dd6-11ea-bb65-0242ac110005" are running Jan 23 11:48:48.294: INFO: Pod "my-hostname-basic-4baee4a7-3dd6-11ea-bb65-0242ac110005-nqkdb" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-23 11:48:39 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-23 11:48:39 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-4baee4a7-3dd6-11ea-bb65-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-23 11:48:39 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-4baee4a7-3dd6-11ea-bb65-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-23 11:48:39 +0000 UTC Reason: Message:}]) Jan 23 11:48:48.294: INFO: Trying to dial the pod Jan 23 11:48:53.355: INFO: Controller my-hostname-basic-4baee4a7-3dd6-11ea-bb65-0242ac110005: Got expected result from replica 1 [my-hostname-basic-4baee4a7-3dd6-11ea-bb65-0242ac110005-nqkdb]: 
"my-hostname-basic-4baee4a7-3dd6-11ea-bb65-0242ac110005-nqkdb", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 11:48:53.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-w9qpf" for this suite. Jan 23 11:49:01.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:49:01.429: INFO: namespace: e2e-tests-replication-controller-w9qpf, resource: bindings, ignored listing per whitelist Jan 23 11:49:01.635: INFO: namespace e2e-tests-replication-controller-w9qpf deletion completed in 8.267712505s • [SLOW TEST:22.730 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 11:49:01.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-hp5zb.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-hp5zb.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-hp5zb.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-hp5zb.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-hp5zb.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-hp5zb.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 23 11:49:19.055: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-hp5zb/dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005) Jan 23 11:49:19.064: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-hp5zb/dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005) Jan 23 11:49:19.082: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-hp5zb/dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005) Jan 23 11:49:19.088: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-hp5zb/dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005) Jan 23 11:49:19.095: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-hp5zb/dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005) Jan 23 11:49:19.101: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-hp5zb/dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005: the server could not find the 
requested resource (get pods dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005) Jan 23 11:49:19.108: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-hp5zb.svc.cluster.local from pod e2e-tests-dns-hp5zb/dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005) Jan 23 11:49:19.113: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-hp5zb/dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005) Jan 23 11:49:19.118: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-hp5zb/dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005) Jan 23 11:49:19.123: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-hp5zb/dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005) Jan 23 11:49:19.135: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-hp5zb/dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005) Jan 23 11:49:19.142: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-hp5zb/dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005) Jan 23 11:49:19.147: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-hp5zb/dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005) Jan 23 11:49:19.156: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod 
e2e-tests-dns-hp5zb/dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005) Jan 23 11:49:19.161: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-hp5zb/dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005) Jan 23 11:49:19.165: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-hp5zb/dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005) Jan 23 11:49:19.170: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-hp5zb.svc.cluster.local from pod e2e-tests-dns-hp5zb/dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005) Jan 23 11:49:19.174: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-hp5zb/dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005) Jan 23 11:49:19.179: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-hp5zb/dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005) Jan 23 11:49:19.184: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-hp5zb/dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005: the server could not find the requested resource (get pods dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005) Jan 23 11:49:19.184: INFO: Lookups using e2e-tests-dns-hp5zb/dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc 
wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-hp5zb.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-hp5zb.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord] Jan 23 11:49:24.355: INFO: DNS probes using e2e-tests-dns-hp5zb/dns-test-59d2f250-3dd6-11ea-bb65-0242ac110005 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 11:49:24.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-hp5zb" for this suite. 
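The "Unable to read" errors above are not failures: the prober is polling for marker files before the in-pod probe loops have produced them, and the run still ends with "DNS probes ... succeeded". Each probe iteration follows the same pattern shown in the wheezy/jessie scripts earlier in this log; a condensed sketch of one iteration (lifted directly from the script, with the `$$` Makefile-style escaping reduced to plain `$`, and assuming the container mounts a writable /results directory as the test pods do):

```shell
# One UDP probe against the cluster DNS, as run inside the test pod.
# +search appends the pod's DNS search path, so the short name
# "kubernetes.default" resolves via the cluster domain.
check="$(dig +notcp +noall +answer +search kubernetes.default A)" \
  && test -n "$check" \
  && echo OK > /results/wheezy_udp@kubernetes.default
```

The prober then reads these marker files out of the pod; a missing file on one poll simply means that iteration of the loop has not succeeded yet.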
Jan 23 11:49:32.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:49:32.629: INFO: namespace: e2e-tests-dns-hp5zb, resource: bindings, ignored listing per whitelist Jan 23 11:49:32.720: INFO: namespace e2e-tests-dns-hp5zb deletion completed in 8.215276956s • [SLOW TEST:31.085 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 11:49:32.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 23 11:49:32.929: INFO: Creating ReplicaSet my-hostname-basic-6bcfb3b6-3dd6-11ea-bb65-0242ac110005 Jan 23 11:49:33.132: INFO: Pod name my-hostname-basic-6bcfb3b6-3dd6-11ea-bb65-0242ac110005: Found 0 pods out of 1 Jan 23 11:49:38.152: INFO: Pod name my-hostname-basic-6bcfb3b6-3dd6-11ea-bb65-0242ac110005: Found 1 pods out of 1 Jan 23 11:49:38.152: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-6bcfb3b6-3dd6-11ea-bb65-0242ac110005" is running Jan 23 11:49:42.186: INFO: Pod "my-hostname-basic-6bcfb3b6-3dd6-11ea-bb65-0242ac110005-qwgcp" is running (conditions: 
[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-23 11:49:33 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-23 11:49:33 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-6bcfb3b6-3dd6-11ea-bb65-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-23 11:49:33 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-6bcfb3b6-3dd6-11ea-bb65-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-23 11:49:33 +0000 UTC Reason: Message:}]) Jan 23 11:49:42.186: INFO: Trying to dial the pod Jan 23 11:49:47.247: INFO: Controller my-hostname-basic-6bcfb3b6-3dd6-11ea-bb65-0242ac110005: Got expected result from replica 1 [my-hostname-basic-6bcfb3b6-3dd6-11ea-bb65-0242ac110005-qwgcp]: "my-hostname-basic-6bcfb3b6-3dd6-11ea-bb65-0242ac110005-qwgcp", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 11:49:47.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-l7xbv" for this suite. 
Jan 23 11:49:53.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:49:53.623: INFO: namespace: e2e-tests-replicaset-l7xbv, resource: bindings, ignored listing per whitelist Jan 23 11:49:53.741: INFO: namespace e2e-tests-replicaset-l7xbv deletion completed in 6.484979664s • [SLOW TEST:21.020 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 11:49:53.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-d62bz/configmap-test-785906ad-3dd6-11ea-bb65-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 23 11:49:54.025: INFO: Waiting up to 5m0s for pod "pod-configmaps-785bbc92-3dd6-11ea-bb65-0242ac110005" in namespace "e2e-tests-configmap-d62bz" to be "success or failure" Jan 23 11:49:54.224: INFO: Pod "pod-configmaps-785bbc92-3dd6-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 199.412236ms Jan 23 11:49:56.247: INFO: Pod "pod-configmaps-785bbc92-3dd6-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222254821s Jan 23 11:49:58.967: INFO: Pod "pod-configmaps-785bbc92-3dd6-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.94244482s Jan 23 11:50:00.978: INFO: Pod "pod-configmaps-785bbc92-3dd6-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.952963912s Jan 23 11:50:03.029: INFO: Pod "pod-configmaps-785bbc92-3dd6-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.004330792s Jan 23 11:50:05.307: INFO: Pod "pod-configmaps-785bbc92-3dd6-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.28191363s Jan 23 11:50:07.324: INFO: Pod "pod-configmaps-785bbc92-3dd6-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.299527311s STEP: Saw pod success Jan 23 11:50:07.325: INFO: Pod "pod-configmaps-785bbc92-3dd6-11ea-bb65-0242ac110005" satisfied condition "success or failure" Jan 23 11:50:07.332: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-785bbc92-3dd6-11ea-bb65-0242ac110005 container env-test: STEP: delete the pod Jan 23 11:50:07.450: INFO: Waiting for pod pod-configmaps-785bbc92-3dd6-11ea-bb65-0242ac110005 to disappear Jan 23 11:50:07.605: INFO: Pod pod-configmaps-785bbc92-3dd6-11ea-bb65-0242ac110005 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 11:50:07.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-d62bz" for this suite. 
Jan 23 11:50:13.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:50:13.747: INFO: namespace: e2e-tests-configmap-d62bz, resource: bindings, ignored listing per whitelist Jan 23 11:50:13.905: INFO: namespace e2e-tests-configmap-d62bz deletion completed in 6.285641889s • [SLOW TEST:20.163 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 11:50:13.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 23 11:50:14.176: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8462b4ec-3dd6-11ea-bb65-0242ac110005" in namespace "e2e-tests-downward-api-rzzn8" to be "success or failure" Jan 23 11:50:14.210: INFO: Pod "downwardapi-volume-8462b4ec-3dd6-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 34.029431ms Jan 23 11:50:16.238: INFO: Pod "downwardapi-volume-8462b4ec-3dd6-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062551834s Jan 23 11:50:18.250: INFO: Pod "downwardapi-volume-8462b4ec-3dd6-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073903243s Jan 23 11:50:20.268: INFO: Pod "downwardapi-volume-8462b4ec-3dd6-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092418621s Jan 23 11:50:22.283: INFO: Pod "downwardapi-volume-8462b4ec-3dd6-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107207955s Jan 23 11:50:24.401: INFO: Pod "downwardapi-volume-8462b4ec-3dd6-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.225552299s STEP: Saw pod success Jan 23 11:50:24.402: INFO: Pod "downwardapi-volume-8462b4ec-3dd6-11ea-bb65-0242ac110005" satisfied condition "success or failure" Jan 23 11:50:24.408: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8462b4ec-3dd6-11ea-bb65-0242ac110005 container client-container: STEP: delete the pod Jan 23 11:50:24.589: INFO: Waiting for pod downwardapi-volume-8462b4ec-3dd6-11ea-bb65-0242ac110005 to disappear Jan 23 11:50:24.614: INFO: Pod downwardapi-volume-8462b4ec-3dd6-11ea-bb65-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 11:50:24.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-rzzn8" for this suite. 
Jan 23 11:50:30.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:50:30.832: INFO: namespace: e2e-tests-downward-api-rzzn8, resource: bindings, ignored listing per whitelist Jan 23 11:50:30.908: INFO: namespace e2e-tests-downward-api-rzzn8 deletion completed in 6.282662764s • [SLOW TEST:17.003 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 11:50:30.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-mplh9 Jan 23 11:50:39.345: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-mplh9 STEP: checking the pod's current state and verifying that restartCount is present Jan 23 11:50:39.363: INFO: Initial restart count of pod liveness-exec is 
0 Jan 23 11:51:32.066: INFO: Restart count of pod e2e-tests-container-probe-mplh9/liveness-exec is now 1 (52.702970851s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 11:51:32.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-mplh9" for this suite. Jan 23 11:51:38.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:51:38.543: INFO: namespace: e2e-tests-container-probe-mplh9, resource: bindings, ignored listing per whitelist Jan 23 11:51:38.882: INFO: namespace e2e-tests-container-probe-mplh9 deletion completed in 6.595407035s • [SLOW TEST:67.974 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 11:51:38.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-b6fce005-3dd6-11ea-bb65-0242ac110005 STEP: Creating a pod to test consume secrets Jan 23 11:51:39.101: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b6ffea8f-3dd6-11ea-bb65-0242ac110005" in namespace "e2e-tests-projected-6xjxs" to be "success or failure" Jan 23 11:51:39.112: INFO: Pod "pod-projected-secrets-b6ffea8f-3dd6-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.678877ms Jan 23 11:51:41.136: INFO: Pod "pod-projected-secrets-b6ffea8f-3dd6-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034980847s Jan 23 11:51:43.156: INFO: Pod "pod-projected-secrets-b6ffea8f-3dd6-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055078788s Jan 23 11:51:45.283: INFO: Pod "pod-projected-secrets-b6ffea8f-3dd6-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.182275927s Jan 23 11:51:47.473: INFO: Pod "pod-projected-secrets-b6ffea8f-3dd6-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.372521322s Jan 23 11:51:49.496: INFO: Pod "pod-projected-secrets-b6ffea8f-3dd6-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.394902358s STEP: Saw pod success Jan 23 11:51:49.496: INFO: Pod "pod-projected-secrets-b6ffea8f-3dd6-11ea-bb65-0242ac110005" satisfied condition "success or failure" Jan 23 11:51:49.504: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-b6ffea8f-3dd6-11ea-bb65-0242ac110005 container projected-secret-volume-test: STEP: delete the pod Jan 23 11:51:49.845: INFO: Waiting for pod pod-projected-secrets-b6ffea8f-3dd6-11ea-bb65-0242ac110005 to disappear Jan 23 11:51:49.945: INFO: Pod pod-projected-secrets-b6ffea8f-3dd6-11ea-bb65-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 11:51:49.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-6xjxs" for this suite. Jan 23 11:51:56.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:51:56.151: INFO: namespace: e2e-tests-projected-6xjxs, resource: bindings, ignored listing per whitelist Jan 23 11:51:56.210: INFO: namespace e2e-tests-projected-6xjxs deletion completed in 6.251162786s • [SLOW TEST:17.326 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client Jan 23 11:51:56.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Jan 23 11:51:56.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-c4bmj' Jan 23 11:51:59.738: INFO: stderr: "" Jan 23 11:51:59.739: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 23 11:51:59.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-c4bmj' Jan 23 11:52:00.025: INFO: stderr: "" Jan 23 11:52:00.025: INFO: stdout: "update-demo-nautilus-nf8lx update-demo-nautilus-vcb2t " Jan 23 11:52:00.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nf8lx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c4bmj' Jan 23 11:52:00.257: INFO: stderr: "" Jan 23 11:52:00.257: INFO: stdout: "" Jan 23 11:52:00.258: INFO: update-demo-nautilus-nf8lx is created but not running Jan 23 11:52:05.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-c4bmj' Jan 23 11:52:05.447: INFO: stderr: "" Jan 23 11:52:05.447: INFO: stdout: "update-demo-nautilus-nf8lx update-demo-nautilus-vcb2t " Jan 23 11:52:05.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nf8lx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c4bmj' Jan 23 11:52:05.588: INFO: stderr: "" Jan 23 11:52:05.588: INFO: stdout: "" Jan 23 11:52:05.588: INFO: update-demo-nautilus-nf8lx is created but not running Jan 23 11:52:10.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-c4bmj' Jan 23 11:52:10.772: INFO: stderr: "" Jan 23 11:52:10.772: INFO: stdout: "update-demo-nautilus-nf8lx update-demo-nautilus-vcb2t " Jan 23 11:52:10.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nf8lx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c4bmj' Jan 23 11:52:10.995: INFO: stderr: "" Jan 23 11:52:10.995: INFO: stdout: "" Jan 23 11:52:10.995: INFO: update-demo-nautilus-nf8lx is created but not running Jan 23 11:52:15.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-c4bmj' Jan 23 11:52:16.189: INFO: stderr: "" Jan 23 11:52:16.189: INFO: stdout: "update-demo-nautilus-nf8lx update-demo-nautilus-vcb2t " Jan 23 11:52:16.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nf8lx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c4bmj' Jan 23 11:52:16.303: INFO: stderr: "" Jan 23 11:52:16.303: INFO: stdout: "true" Jan 23 11:52:16.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nf8lx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c4bmj' Jan 23 11:52:16.469: INFO: stderr: "" Jan 23 11:52:16.469: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 23 11:52:16.469: INFO: validating pod update-demo-nautilus-nf8lx Jan 23 11:52:16.514: INFO: got data: { "image": "nautilus.jpg" } Jan 23 11:52:16.514: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 23 11:52:16.514: INFO: update-demo-nautilus-nf8lx is verified up and running Jan 23 11:52:16.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vcb2t -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c4bmj' Jan 23 11:52:16.690: INFO: stderr: "" Jan 23 11:52:16.690: INFO: stdout: "true" Jan 23 11:52:16.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vcb2t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c4bmj' Jan 23 11:52:16.810: INFO: stderr: "" Jan 23 11:52:16.810: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 23 11:52:16.810: INFO: validating pod update-demo-nautilus-vcb2t Jan 23 11:52:16.820: INFO: got data: { "image": "nautilus.jpg" } Jan 23 11:52:16.820: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 23 11:52:16.820: INFO: update-demo-nautilus-vcb2t is verified up and running STEP: using delete to clean up resources Jan 23 11:52:16.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-c4bmj' Jan 23 11:52:16.954: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 23 11:52:16.954: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 23 11:52:16.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-c4bmj' Jan 23 11:52:17.149: INFO: stderr: "No resources found.\n" Jan 23 11:52:17.149: INFO: stdout: "" Jan 23 11:52:17.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-c4bmj -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 23 11:52:17.382: INFO: stderr: "" Jan 23 11:52:17.382: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 11:52:17.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-c4bmj" for this suite. 
Jan 23 11:52:41.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:52:41.661: INFO: namespace: e2e-tests-kubectl-c4bmj, resource: bindings, ignored listing per whitelist Jan 23 11:52:41.705: INFO: namespace e2e-tests-kubectl-c4bmj deletion completed in 24.303728943s • [SLOW TEST:45.494 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 11:52:41.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 23 11:52:41.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 
--image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-skf28' Jan 23 11:52:42.169: INFO: stderr: "" Jan 23 11:52:42.169: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Jan 23 11:52:52.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-skf28 -o json' Jan 23 11:52:52.325: INFO: stderr: "" Jan 23 11:52:52.325: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-01-23T11:52:42Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-skf28\",\n \"resourceVersion\": \"19184321\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-skf28/pods/e2e-test-nginx-pod\",\n \"uid\": \"dc8e481b-3dd6-11ea-a994-fa163e34d433\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-cf62r\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": 
\"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-cf62r\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-cf62r\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-23T11:52:42Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-23T11:52:49Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-23T11:52:49Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-23T11:52:42Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://c497ab654855146c69679e44d35b22e860338b3d2244f9ac2df578222234104c\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-23T11:52:49Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.1.240\",\n \"phase\": \"Running\",\n \"podIP\": \"10.32.0.4\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-01-23T11:52:42Z\"\n }\n}\n" STEP: replace the image in the pod Jan 23 11:52:52.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-skf28' Jan 23 11:52:52.764: INFO: stderr: "" Jan 23 11:52:52.764: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Jan 23 11:52:52.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-skf28' Jan 23 11:53:00.675: INFO: stderr: "" Jan 23 11:53:00.675: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 11:53:00.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-skf28" for this suite. Jan 23 11:53:06.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 11:53:06.869: INFO: namespace: e2e-tests-kubectl-skf28, resource: bindings, ignored listing per whitelist Jan 23 11:53:06.908: INFO: namespace e2e-tests-kubectl-skf28 deletion completed in 6.164270142s • [SLOW TEST:25.203 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 11:53:06.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service 
account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Jan 23 11:53:07.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6c9w7' Jan 23 11:53:07.643: INFO: stderr: "" Jan 23 11:53:07.643: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jan 23 11:53:08.662: INFO: Selector matched 1 pods for map[app:redis] Jan 23 11:53:08.662: INFO: Found 0 / 1 Jan 23 11:53:09.877: INFO: Selector matched 1 pods for map[app:redis] Jan 23 11:53:09.877: INFO: Found 0 / 1 Jan 23 11:53:10.670: INFO: Selector matched 1 pods for map[app:redis] Jan 23 11:53:10.671: INFO: Found 0 / 1 Jan 23 11:53:11.666: INFO: Selector matched 1 pods for map[app:redis] Jan 23 11:53:11.666: INFO: Found 0 / 1 Jan 23 11:53:12.679: INFO: Selector matched 1 pods for map[app:redis] Jan 23 11:53:12.679: INFO: Found 0 / 1 Jan 23 11:53:13.661: INFO: Selector matched 1 pods for map[app:redis] Jan 23 11:53:13.661: INFO: Found 0 / 1 Jan 23 11:53:14.694: INFO: Selector matched 1 pods for map[app:redis] Jan 23 11:53:14.694: INFO: Found 0 / 1 Jan 23 11:53:15.662: INFO: Selector matched 1 pods for map[app:redis] Jan 23 11:53:15.662: INFO: Found 0 / 1 Jan 23 11:53:16.662: INFO: Selector matched 1 pods for map[app:redis] Jan 23 11:53:16.662: INFO: Found 1 / 1 Jan 23 11:53:16.662: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jan 23 11:53:16.668: INFO: Selector matched 1 pods for map[app:redis] Jan 23 11:53:16.668: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jan 23 11:53:16.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-hzzcq --namespace=e2e-tests-kubectl-6c9w7 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 23 11:53:16.843: INFO: stderr: ""
Jan 23 11:53:16.843: INFO: stdout: "pod/redis-master-hzzcq patched\n"
STEP: checking annotations
Jan 23 11:53:16.904: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 11:53:16.904: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:53:16.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6c9w7" for this suite.
Jan 23 11:53:41.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:53:41.087: INFO: namespace: e2e-tests-kubectl-6c9w7, resource: bindings, ignored listing per whitelist
Jan 23 11:53:41.139: INFO: namespace e2e-tests-kubectl-6c9w7 deletion completed in 24.221848462s
• [SLOW TEST:34.231 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl patch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:53:41.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jan 23 11:53:41.924: INFO: created pod pod-service-account-defaultsa
Jan 23 11:53:41.924: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 23 11:53:42.018: INFO: created pod pod-service-account-mountsa
Jan 23 11:53:42.018: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 23 11:53:42.059: INFO: created pod pod-service-account-nomountsa
Jan 23 11:53:42.059: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 23 11:53:42.211: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 23 11:53:42.211: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 23 11:53:42.414: INFO: created pod pod-service-account-mountsa-mountspec
Jan 23 11:53:42.414: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 23 11:53:42.451: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 23 11:53:42.451: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 23 11:53:43.013: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 23 11:53:43.014: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 23 11:53:43.463: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 23 11:53:43.463: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 23 11:53:43.493: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 23 11:53:43.494: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:53:43.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-zwmxg" for this suite.
Jan 23 11:54:11.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:54:12.073: INFO: namespace: e2e-tests-svcaccounts-zwmxg, resource: bindings, ignored listing per whitelist
Jan 23 11:54:12.171: INFO: namespace e2e-tests-svcaccounts-zwmxg deletion completed in 27.193502963s
• [SLOW TEST:31.031 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:54:12.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-12612ca0-3dd7-11ea-bb65-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 23 11:54:12.402: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-12621299-3dd7-11ea-bb65-0242ac110005" in namespace "e2e-tests-projected-rc5rx" to be "success or failure"
Jan 23 11:54:12.421: INFO: Pod "pod-projected-configmaps-12621299-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.218839ms
Jan 23 11:54:14.727: INFO: Pod "pod-projected-configmaps-12621299-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324574793s
Jan 23 11:54:16.746: INFO: Pod "pod-projected-configmaps-12621299-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.343265935s
Jan 23 11:54:18.773: INFO: Pod "pod-projected-configmaps-12621299-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.370589438s
Jan 23 11:54:20.786: INFO: Pod "pod-projected-configmaps-12621299-3dd7-11ea-bb65-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.383506232s
Jan 23 11:54:22.843: INFO: Pod "pod-projected-configmaps-12621299-3dd7-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.44020536s
STEP: Saw pod success
Jan 23 11:54:22.843: INFO: Pod "pod-projected-configmaps-12621299-3dd7-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 11:54:22.868: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-12621299-3dd7-11ea-bb65-0242ac110005 container projected-configmap-volume-test:
STEP: delete the pod
Jan 23 11:54:23.017: INFO: Waiting for pod pod-projected-configmaps-12621299-3dd7-11ea-bb65-0242ac110005 to disappear
Jan 23 11:54:23.098: INFO: Pod pod-projected-configmaps-12621299-3dd7-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:54:23.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rc5rx" for this suite.
Jan 23 11:54:29.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:54:29.351: INFO: namespace: e2e-tests-projected-rc5rx, resource: bindings, ignored listing per whitelist
Jan 23 11:54:29.523: INFO: namespace e2e-tests-projected-rc5rx deletion completed in 6.378736214s
• [SLOW TEST:17.352 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:54:29.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 23 11:54:29.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-h6v9s'
Jan 23 11:54:29.934: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 23 11:54:29.934: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jan 23 11:54:30.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-h6v9s'
Jan 23 11:54:30.315: INFO: stderr: ""
Jan 23 11:54:30.315: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:54:30.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-h6v9s" for this suite.
Jan 23 11:54:38.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:54:38.717: INFO: namespace: e2e-tests-kubectl-h6v9s, resource: bindings, ignored listing per whitelist
Jan 23 11:54:38.744: INFO: namespace e2e-tests-kubectl-h6v9s deletion completed in 8.41352537s
• [SLOW TEST:9.220 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:54:38.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-484h
STEP: Creating a pod to test atomic-volume-subpath
Jan 23 11:54:39.008: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-484h" in namespace "e2e-tests-subpath-w9ns4" to be "success or failure"
Jan 23 11:54:39.171: INFO: Pod "pod-subpath-test-configmap-484h": Phase="Pending", Reason="", readiness=false. Elapsed: 162.274676ms
Jan 23 11:54:41.418: INFO: Pod "pod-subpath-test-configmap-484h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.409803977s
Jan 23 11:54:43.432: INFO: Pod "pod-subpath-test-configmap-484h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.42340652s
Jan 23 11:54:45.578: INFO: Pod "pod-subpath-test-configmap-484h": Phase="Pending", Reason="", readiness=false. Elapsed: 6.569837297s
Jan 23 11:54:47.625: INFO: Pod "pod-subpath-test-configmap-484h": Phase="Pending", Reason="", readiness=false. Elapsed: 8.615979026s
Jan 23 11:54:49.803: INFO: Pod "pod-subpath-test-configmap-484h": Phase="Pending", Reason="", readiness=false. Elapsed: 10.794483714s
Jan 23 11:54:52.296: INFO: Pod "pod-subpath-test-configmap-484h": Phase="Pending", Reason="", readiness=false. Elapsed: 13.287644324s
Jan 23 11:54:54.348: INFO: Pod "pod-subpath-test-configmap-484h": Phase="Pending", Reason="", readiness=false. Elapsed: 15.339537144s
Jan 23 11:54:56.359: INFO: Pod "pod-subpath-test-configmap-484h": Phase="Running", Reason="", readiness=false. Elapsed: 17.350496481s
Jan 23 11:54:58.380: INFO: Pod "pod-subpath-test-configmap-484h": Phase="Running", Reason="", readiness=false. Elapsed: 19.371073487s
Jan 23 11:55:00.395: INFO: Pod "pod-subpath-test-configmap-484h": Phase="Running", Reason="", readiness=false. Elapsed: 21.38626296s
Jan 23 11:55:02.412: INFO: Pod "pod-subpath-test-configmap-484h": Phase="Running", Reason="", readiness=false. Elapsed: 23.403622645s
Jan 23 11:55:04.428: INFO: Pod "pod-subpath-test-configmap-484h": Phase="Running", Reason="", readiness=false. Elapsed: 25.418993334s
Jan 23 11:55:06.447: INFO: Pod "pod-subpath-test-configmap-484h": Phase="Running", Reason="", readiness=false. Elapsed: 27.437937565s
Jan 23 11:55:08.658: INFO: Pod "pod-subpath-test-configmap-484h": Phase="Running", Reason="", readiness=false. Elapsed: 29.648965752s
Jan 23 11:55:10.672: INFO: Pod "pod-subpath-test-configmap-484h": Phase="Running", Reason="", readiness=false. Elapsed: 31.663930512s
Jan 23 11:55:12.691: INFO: Pod "pod-subpath-test-configmap-484h": Phase="Running", Reason="", readiness=false. Elapsed: 33.682686081s
Jan 23 11:55:14.720: INFO: Pod "pod-subpath-test-configmap-484h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.711356982s
STEP: Saw pod success
Jan 23 11:55:14.720: INFO: Pod "pod-subpath-test-configmap-484h" satisfied condition "success or failure"
Jan 23 11:55:14.731: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-484h container test-container-subpath-configmap-484h:
STEP: delete the pod
Jan 23 11:55:14.973: INFO: Waiting for pod pod-subpath-test-configmap-484h to disappear
Jan 23 11:55:14.994: INFO: Pod pod-subpath-test-configmap-484h no longer exists
STEP: Deleting pod pod-subpath-test-configmap-484h
Jan 23 11:55:14.994: INFO: Deleting pod "pod-subpath-test-configmap-484h" in namespace "e2e-tests-subpath-w9ns4"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:55:15.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-w9ns4" for this suite.
Jan 23 11:55:23.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:55:23.270: INFO: namespace: e2e-tests-subpath-w9ns4, resource: bindings, ignored listing per whitelist
Jan 23 11:55:23.309: INFO: namespace e2e-tests-subpath-w9ns4 deletion completed in 8.295367422s
• [SLOW TEST:44.563 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod with mountPath of existing file [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:55:23.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-3ccabfa0-3dd7-11ea-bb65-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 23 11:55:23.595: INFO: Waiting up to 5m0s for pod "pod-configmaps-3ccbcbd1-3dd7-11ea-bb65-0242ac110005" in namespace "e2e-tests-configmap-57mt6" to be "success or failure"
Jan 23 11:55:23.613: INFO: Pod "pod-configmaps-3ccbcbd1-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.105631ms
Jan 23 11:55:25.626: INFO: Pod "pod-configmaps-3ccbcbd1-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030983397s
Jan 23 11:55:27.641: INFO: Pod "pod-configmaps-3ccbcbd1-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045980701s
Jan 23 11:55:29.663: INFO: Pod "pod-configmaps-3ccbcbd1-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067653616s
Jan 23 11:55:31.903: INFO: Pod "pod-configmaps-3ccbcbd1-3dd7-11ea-bb65-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.307882514s
Jan 23 11:55:34.309: INFO: Pod "pod-configmaps-3ccbcbd1-3dd7-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.71347739s
STEP: Saw pod success
Jan 23 11:55:34.309: INFO: Pod "pod-configmaps-3ccbcbd1-3dd7-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 11:55:34.352: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-3ccbcbd1-3dd7-11ea-bb65-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Jan 23 11:55:34.480: INFO: Waiting for pod pod-configmaps-3ccbcbd1-3dd7-11ea-bb65-0242ac110005 to disappear
Jan 23 11:55:34.487: INFO: Pod pod-configmaps-3ccbcbd1-3dd7-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:55:34.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-57mt6" for this suite.
Jan 23 11:55:40.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:55:40.592: INFO: namespace: e2e-tests-configmap-57mt6, resource: bindings, ignored listing per whitelist
Jan 23 11:55:40.987: INFO: namespace e2e-tests-configmap-57mt6 deletion completed in 6.487414613s
• [SLOW TEST:17.678 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:55:40.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jan 23 11:55:41.192: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix501664594/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:55:41.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-52nv8" for this suite.
Jan 23 11:55:47.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:55:47.557: INFO: namespace: e2e-tests-kubectl-52nv8, resource: bindings, ignored listing per whitelist
Jan 23 11:55:47.626: INFO: namespace e2e-tests-kubectl-52nv8 deletion completed in 6.255370818s
• [SLOW TEST:6.639 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:55:47.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Jan 23 11:55:47.801: INFO: Waiting up to 5m0s for pod "client-containers-4b3e4129-3dd7-11ea-bb65-0242ac110005" in namespace "e2e-tests-containers-mbwkx" to be "success or failure"
Jan 23 11:55:47.811: INFO: Pod "client-containers-4b3e4129-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.155953ms
Jan 23 11:55:49.829: INFO: Pod "client-containers-4b3e4129-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027314187s
Jan 23 11:55:51.919: INFO: Pod "client-containers-4b3e4129-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117985266s
Jan 23 11:55:54.264: INFO: Pod "client-containers-4b3e4129-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.462903604s
Jan 23 11:55:56.386: INFO: Pod "client-containers-4b3e4129-3dd7-11ea-bb65-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.58504691s
Jan 23 11:55:58.517: INFO: Pod "client-containers-4b3e4129-3dd7-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.715826297s
STEP: Saw pod success
Jan 23 11:55:58.517: INFO: Pod "client-containers-4b3e4129-3dd7-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 11:55:58.542: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-4b3e4129-3dd7-11ea-bb65-0242ac110005 container test-container:
STEP: delete the pod
Jan 23 11:55:58.767: INFO: Waiting for pod client-containers-4b3e4129-3dd7-11ea-bb65-0242ac110005 to disappear
Jan 23 11:55:58.785: INFO: Pod client-containers-4b3e4129-3dd7-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:55:58.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-mbwkx" for this suite.
Jan 23 11:56:04.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:56:05.030: INFO: namespace: e2e-tests-containers-mbwkx, resource: bindings, ignored listing per whitelist
Jan 23 11:56:05.125: INFO: namespace e2e-tests-containers-mbwkx deletion completed in 6.330507768s
• [SLOW TEST:17.498 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:56:05.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-55b0b45d-3dd7-11ea-bb65-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 23 11:56:05.410: INFO: Waiting up to 5m0s for pod "pod-secrets-55b39ca4-3dd7-11ea-bb65-0242ac110005" in namespace "e2e-tests-secrets-p8cqt" to be "success or failure"
Jan 23 11:56:05.430: INFO: Pod "pod-secrets-55b39ca4-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.29813ms
Jan 23 11:56:07.441: INFO: Pod "pod-secrets-55b39ca4-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030332502s
Jan 23 11:56:09.451: INFO: Pod "pod-secrets-55b39ca4-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040896873s
Jan 23 11:56:11.632: INFO: Pod "pod-secrets-55b39ca4-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.221787517s
Jan 23 11:56:13.650: INFO: Pod "pod-secrets-55b39ca4-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.23990481s
Jan 23 11:56:15.676: INFO: Pod "pod-secrets-55b39ca4-3dd7-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.265149973s
STEP: Saw pod success
Jan 23 11:56:15.676: INFO: Pod "pod-secrets-55b39ca4-3dd7-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 11:56:15.689: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-55b39ca4-3dd7-11ea-bb65-0242ac110005 container secret-volume-test:
STEP: delete the pod
Jan 23 11:56:15.784: INFO: Waiting for pod pod-secrets-55b39ca4-3dd7-11ea-bb65-0242ac110005 to disappear
Jan 23 11:56:15.797: INFO: Pod pod-secrets-55b39ca4-3dd7-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:56:15.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-p8cqt" for this suite.
Jan 23 11:56:22.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:56:22.215: INFO: namespace: e2e-tests-secrets-p8cqt, resource: bindings, ignored listing per whitelist
Jan 23 11:56:22.232: INFO: namespace e2e-tests-secrets-p8cqt deletion completed in 6.252234261s
• [SLOW TEST:17.107 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:56:22.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 23 11:56:22.449: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 23 11:56:22.594: INFO: Waiting for terminating namespaces to be deleted...
Jan 23 11:56:22.622: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Jan 23 11:56:22.668: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Jan 23 11:56:22.669: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 23 11:56:22.669: INFO: Container coredns ready: true, restart count 0
Jan 23 11:56:22.669: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 23 11:56:22.669: INFO: Container kube-proxy ready: true, restart count 0
Jan 23 11:56:22.669: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Jan 23 11:56:22.669: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 23 11:56:22.669: INFO: Container weave ready: true, restart count 0
Jan 23 11:56:22.669: INFO: Container weave-npc ready: true, restart count 0
Jan 23 11:56:22.669: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 23 11:56:22.669: INFO: Container coredns ready: true, restart count 0
Jan 23 11:56:22.669: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Jan 23 11:56:22.669: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
[It] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Jan 23 11:56:22.842: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 23 11:56:22.842: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 23 11:56:22.842: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan 23 11:56:22.842: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Jan 23 11:56:22.842: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Jan 23 11:56:22.842: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan 23 11:56:22.842: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 23 11:56:22.842: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-6023792f-3dd7-11ea-bb65-0242ac110005.15ec8211f14a04ce], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-s274h/filler-pod-6023792f-3dd7-11ea-bb65-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: Type = [Normal], Name = [filler-pod-6023792f-3dd7-11ea-bb65-0242ac110005.15ec821301e890e4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-6023792f-3dd7-11ea-bb65-0242ac110005.15ec821394f42ab4], Reason = [Created], Message = [Created container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-6023792f-3dd7-11ea-bb65-0242ac110005.15ec8213ca483751], Reason = [Started], Message = [Started container]
STEP: Considering event: Type = [Warning], Name = [additional-pod.15ec821448757bd1], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:56:34.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-s274h" for this suite.
Jan 23 11:56:40.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:56:40.477: INFO: namespace: e2e-tests-sched-pred-s274h, resource: bindings, ignored listing per whitelist
Jan 23 11:56:40.489: INFO: namespace e2e-tests-sched-pred-s274h deletion completed in 6.239643098s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:18.256 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:56:40.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0123 11:57:13.284656 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 23 11:57:13.284: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:57:13.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-p9rkj" for this suite.
Jan 23 11:57:21.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:57:21.452: INFO: namespace: e2e-tests-gc-p9rkj, resource: bindings, ignored listing per whitelist
Jan 23 11:57:21.497: INFO: namespace e2e-tests-gc-p9rkj deletion completed in 8.205301435s
• [SLOW TEST:41.006 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:57:21.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:57:28.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-ftft4" for this suite.
Jan 23 11:57:34.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:57:34.320: INFO: namespace: e2e-tests-namespaces-ftft4, resource: bindings, ignored listing per whitelist
Jan 23 11:57:34.431: INFO: namespace e2e-tests-namespaces-ftft4 deletion completed in 6.235746466s
STEP: Destroying namespace "e2e-tests-nsdeletetest-pq8sx" for this suite.
Jan 23 11:57:34.436: INFO: Namespace e2e-tests-nsdeletetest-pq8sx was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-85wdr" for this suite.
Jan 23 11:57:40.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:57:40.735: INFO: namespace: e2e-tests-nsdeletetest-85wdr, resource: bindings, ignored listing per whitelist
Jan 23 11:57:40.758: INFO: namespace e2e-tests-nsdeletetest-85wdr deletion completed in 6.322401907s
• [SLOW TEST:19.260 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:57:40.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 23 11:57:53.553: INFO: Successfully updated pod "annotationupdate8eb115a6-3dd7-11ea-bb65-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:57:55.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-nv65g" for this suite.
Jan 23 11:58:19.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:58:19.868: INFO: namespace: e2e-tests-downward-api-nv65g, resource: bindings, ignored listing per whitelist
Jan 23 11:58:19.875: INFO: namespace e2e-tests-downward-api-nv65g deletion completed in 24.19436274s
• [SLOW TEST:39.116 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:58:19.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-a5fb65ad-3dd7-11ea-bb65-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 23 11:58:20.032: INFO: Waiting up to 5m0s for pod "pod-secrets-a5fbf696-3dd7-11ea-bb65-0242ac110005" in namespace "e2e-tests-secrets-7b2gk" to be "success or failure"
Jan 23 11:58:20.040: INFO: Pod "pod-secrets-a5fbf696-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.435106ms
Jan 23 11:58:22.062: INFO: Pod "pod-secrets-a5fbf696-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029768837s
Jan 23 11:58:24.077: INFO: Pod "pod-secrets-a5fbf696-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04422046s
Jan 23 11:58:26.101: INFO: Pod "pod-secrets-a5fbf696-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068944747s
Jan 23 11:58:28.118: INFO: Pod "pod-secrets-a5fbf696-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08553309s
Jan 23 11:58:30.149: INFO: Pod "pod-secrets-a5fbf696-3dd7-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.116846643s
STEP: Saw pod success
Jan 23 11:58:30.149: INFO: Pod "pod-secrets-a5fbf696-3dd7-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 11:58:30.158: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-a5fbf696-3dd7-11ea-bb65-0242ac110005 container secret-env-test:
STEP: delete the pod
Jan 23 11:58:30.303: INFO: Waiting for pod pod-secrets-a5fbf696-3dd7-11ea-bb65-0242ac110005 to disappear
Jan 23 11:58:30.332: INFO: Pod pod-secrets-a5fbf696-3dd7-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:58:30.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-7b2gk" for this suite.
Jan 23 11:58:36.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:58:36.571: INFO: namespace: e2e-tests-secrets-7b2gk, resource: bindings, ignored listing per whitelist
Jan 23 11:58:36.641: INFO: namespace e2e-tests-secrets-7b2gk deletion completed in 6.29138419s
• [SLOW TEST:16.766 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:58:36.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 23 11:58:36.829: INFO: Waiting up to 5m0s for pod "pod-aff7a665-3dd7-11ea-bb65-0242ac110005" in namespace "e2e-tests-emptydir-zqdm4" to be "success or failure"
Jan 23 11:58:36.853: INFO: Pod "pod-aff7a665-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.290441ms
Jan 23 11:58:38.875: INFO: Pod "pod-aff7a665-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046126607s
Jan 23 11:58:40.891: INFO: Pod "pod-aff7a665-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061461089s
Jan 23 11:58:42.912: INFO: Pod "pod-aff7a665-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082804796s
Jan 23 11:58:44.930: INFO: Pod "pod-aff7a665-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100365656s
Jan 23 11:58:47.940: INFO: Pod "pod-aff7a665-3dd7-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.110734828s
STEP: Saw pod success
Jan 23 11:58:47.940: INFO: Pod "pod-aff7a665-3dd7-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 11:58:48.398: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-aff7a665-3dd7-11ea-bb65-0242ac110005 container test-container:
STEP: delete the pod
Jan 23 11:58:48.752: INFO: Waiting for pod pod-aff7a665-3dd7-11ea-bb65-0242ac110005 to disappear
Jan 23 11:58:48.768: INFO: Pod pod-aff7a665-3dd7-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:58:48.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-zqdm4" for this suite.
Jan 23 11:58:54.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:58:55.002: INFO: namespace: e2e-tests-emptydir-zqdm4, resource: bindings, ignored listing per whitelist
Jan 23 11:58:55.070: INFO: namespace e2e-tests-emptydir-zqdm4 deletion completed in 6.291396727s
• [SLOW TEST:18.428 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:58:55.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-bb15b088-3dd7-11ea-bb65-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 23 11:58:55.455: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bb17acb4-3dd7-11ea-bb65-0242ac110005" in namespace "e2e-tests-projected-qxs6v" to be "success or failure"
Jan 23 11:58:55.621: INFO: Pod "pod-projected-secrets-bb17acb4-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 166.24208ms
Jan 23 11:58:57.631: INFO: Pod "pod-projected-secrets-bb17acb4-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.176659166s
Jan 23 11:58:59.653: INFO: Pod "pod-projected-secrets-bb17acb4-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198334502s
Jan 23 11:59:01.667: INFO: Pod "pod-projected-secrets-bb17acb4-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.212178898s
Jan 23 11:59:03.762: INFO: Pod "pod-projected-secrets-bb17acb4-3dd7-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.307315091s
Jan 23 11:59:05.775: INFO: Pod "pod-projected-secrets-bb17acb4-3dd7-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.320591171s
STEP: Saw pod success
Jan 23 11:59:05.776: INFO: Pod "pod-projected-secrets-bb17acb4-3dd7-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 11:59:05.784: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-bb17acb4-3dd7-11ea-bb65-0242ac110005 container projected-secret-volume-test:
STEP: delete the pod
Jan 23 11:59:05.884: INFO: Waiting for pod pod-projected-secrets-bb17acb4-3dd7-11ea-bb65-0242ac110005 to disappear
Jan 23 11:59:06.039: INFO: Pod pod-projected-secrets-bb17acb4-3dd7-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:59:06.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qxs6v" for this suite.
Jan 23 11:59:12.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:59:12.175: INFO: namespace: e2e-tests-projected-qxs6v, resource: bindings, ignored listing per whitelist
Jan 23 11:59:12.254: INFO: namespace e2e-tests-projected-qxs6v deletion completed in 6.197354086s
• [SLOW TEST:17.183 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:59:12.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 23 11:59:12.410: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 11:59:28.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-wzt95" for this suite.
Jan 23 11:59:36.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 11:59:36.912: INFO: namespace: e2e-tests-init-container-wzt95, resource: bindings, ignored listing per whitelist
Jan 23 11:59:37.029: INFO: namespace e2e-tests-init-container-wzt95 deletion completed in 8.313431798s
• [SLOW TEST:24.775 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 11:59:37.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-5fcrk
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-5fcrk
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-5fcrk
Jan 23 11:59:37.329: INFO: Found 0 stateful pods, waiting for 1
Jan 23 11:59:47.357: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Jan 23 11:59:57.352: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 23 11:59:57.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 23 11:59:58.021: INFO: stderr: "I0123 11:59:57.602158 2174 log.go:172] (0xc0005b8370) (0xc00073c640) Create stream\nI0123 11:59:57.602360 2174 log.go:172] (0xc0005b8370) (0xc00073c640) Stream added, broadcasting: 1\nI0123 11:59:57.609009 2174 log.go:172] (0xc0005b8370) Reply frame received for 1\nI0123 11:59:57.609062 2174 log.go:172] (0xc0005b8370) (0xc000016be0) Create stream\nI0123 11:59:57.609075 2174 log.go:172] (0xc0005b8370) (0xc000016be0) Stream added, broadcasting: 3\nI0123 11:59:57.610518 2174 log.go:172] (0xc0005b8370) Reply frame received for 3\nI0123 11:59:57.610564 2174 log.go:172] (0xc0005b8370) (0xc0004ea000) Create stream\nI0123 11:59:57.610606 2174 log.go:172] (0xc0005b8370) (0xc0004ea000) Stream added, broadcasting: 5\nI0123 11:59:57.611900 2174 log.go:172] (0xc0005b8370) Reply frame received for 5\nI0123 11:59:57.859531 2174 log.go:172] (0xc0005b8370) Data frame received for 3\nI0123 11:59:57.859581 2174 log.go:172] (0xc000016be0) (3) Data frame handling\nI0123 11:59:57.859620 2174 log.go:172] (0xc000016be0) (3) Data frame sent\nI0123 11:59:58.007778 2174 log.go:172] (0xc0005b8370) Data frame received for 1\nI0123 11:59:58.007954 2174 log.go:172] (0xc0005b8370) (0xc000016be0) Stream removed, broadcasting: 3\nI0123 11:59:58.008165 2174 log.go:172] (0xc00073c640) (1) Data frame handling\nI0123 11:59:58.008201 2174 log.go:172] (0xc00073c640) (1) Data frame sent\nI0123 11:59:58.008213 2174 log.go:172] (0xc0005b8370) (0xc00073c640) Stream removed, broadcasting: 1\nI0123 11:59:58.008300 2174 log.go:172] (0xc0005b8370) (0xc0004ea000) Stream removed, broadcasting: 5\nI0123 11:59:58.008940 2174 log.go:172] (0xc0005b8370) Go away received\nI0123 11:59:58.009063 2174 log.go:172] (0xc0005b8370) (0xc00073c640) Stream removed, broadcasting: 1\nI0123 11:59:58.009110 2174 log.go:172] (0xc0005b8370) (0xc000016be0) Stream removed, broadcasting: 3\nI0123 11:59:58.009125 2174 log.go:172] (0xc0005b8370) (0xc0004ea000) Stream removed, broadcasting: 5\n"
Jan 23 11:59:58.021: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 23 11:59:58.021: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 23 11:59:58.042: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 23 12:00:08.076: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 23 12:00:08.076: INFO: Waiting for statefulset status.replicas updated to 0
Jan 23 12:00:08.261: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999443s
Jan 23 12:00:09.275: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.847075484s
Jan 23 12:00:10.292: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.83309889s
Jan 23 12:00:11.318: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.816177248s
Jan 23 12:00:12.347: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.7901922s
Jan 23 12:00:13.377: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.761691503s
Jan 23 12:00:15.074: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.731202407s
Jan 23 12:00:16.091: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.034834419s
Jan 23 12:00:17.106: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.017324454s
Jan 23 12:00:18.123: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.193888ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-5fcrk
Jan 23 12:00:19.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 23 12:00:19.693: INFO: stderr: "I0123 12:00:19.373422 2196 log.go:172] (0xc00072a370) (0xc0005cb4a0) Create stream\nI0123 12:00:19.374183 2196 log.go:172] (0xc00072a370) (0xc0005cb4a0) Stream added, broadcasting: 1\nI0123 12:00:19.383991 2196 log.go:172] (0xc00072a370) Reply frame received for 1\nI0123 12:00:19.384039 2196 log.go:172] (0xc00072a370) (0xc00041e000) Create stream\nI0123 12:00:19.384053 2196 log.go:172] (0xc00072a370) (0xc00041e000) Stream added, broadcasting: 3\nI0123 12:00:19.385548 2196 log.go:172] (0xc00072a370) Reply frame received for 3\nI0123 12:00:19.385582 2196 log.go:172] (0xc00072a370) (0xc0005cb540) Create stream\nI0123 12:00:19.385594 2196 log.go:172] (0xc00072a370) (0xc0005cb540) Stream added, broadcasting: 5\nI0123 12:00:19.387662 2196 log.go:172] (0xc00072a370) Reply frame received for 5\nI0123 12:00:19.527280 2196 log.go:172] (0xc00072a370) Data frame received for 3\nI0123 12:00:19.527344 2196 log.go:172] (0xc00041e000) (3) Data frame handling\nI0123 12:00:19.527367 2196 log.go:172] (0xc00041e000) (3) Data frame sent\nI0123 12:00:19.680445 2196 log.go:172] (0xc00072a370) Data frame received for 1\nI0123 12:00:19.680540 2196 log.go:172] (0xc00072a370) (0xc00041e000) Stream removed, broadcasting: 3\nI0123 12:00:19.680654 2196 log.go:172] (0xc0005cb4a0) (1) Data frame handling\nI0123 12:00:19.680705 2196 log.go:172] (0xc0005cb4a0) (1) Data frame sent\nI0123 12:00:19.680718 2196 log.go:172] (0xc00072a370) (0xc0005cb540) Stream removed, broadcasting: 5\nI0123 12:00:19.680798 2196 log.go:172] (0xc00072a370) (0xc0005cb4a0) Stream removed, broadcasting: 1\nI0123 12:00:19.681240 2196 log.go:172] (0xc00072a370) (0xc0005cb4a0) Stream removed, broadcasting: 1\nI0123 12:00:19.681252 2196 log.go:172] (0xc00072a370) (0xc00041e000) Stream removed, broadcasting: 3\nI0123 12:00:19.681258 2196 log.go:172] (0xc00072a370) (0xc0005cb540) Stream removed, broadcasting: 5\nI0123 12:00:19.681692 2196 log.go:172] (0xc00072a370) Go away received\n"
Jan 23 12:00:19.693: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 23 12:00:19.693: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 23 12:00:19.705: INFO: Found 1 stateful pods, waiting for 3
Jan 23 12:00:30.129: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 12:00:30.129: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 12:00:30.129: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 23 12:00:39.743: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 12:00:39.743: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 12:00:39.743: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 23 12:00:39.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 23 12:00:40.312: INFO: stderr: "I0123 12:00:39.989554 2219 log.go:172] (0xc0008042c0) (0xc000700640) Create stream\nI0123 12:00:39.989679 2219 log.go:172] (0xc0008042c0) (0xc000700640) Stream added, broadcasting: 1\nI0123 12:00:39.994871 2219 log.go:172] (0xc0008042c0) Reply frame received for 1\nI0123 12:00:39.994922 2219 log.go:172] (0xc0008042c0) (0xc000570dc0) Create stream\nI0123 12:00:39.994935 2219 log.go:172] (0xc0008042c0) (0xc000570dc0) Stream added, broadcasting: 3\nI0123 12:00:39.996037 2219 log.go:172] (0xc0008042c0) Reply frame received for 3\nI0123 12:00:39.996074 2219 log.go:172] (0xc0008042c0) (0xc000560000) Create stream\nI0123 12:00:39.996086 2219 log.go:172] (0xc0008042c0) (0xc000560000) Stream added, broadcasting: 5\nI0123 12:00:39.997269 2219 log.go:172] (0xc0008042c0) Reply frame received for 5\nI0123 12:00:40.132119 2219 log.go:172] (0xc0008042c0) Data frame received for 3\nI0123 12:00:40.132247 2219 log.go:172] (0xc000570dc0) (3) Data frame handling\nI0123 12:00:40.132553 2219 log.go:172] (0xc000570dc0) (3) Data frame sent\nI0123 12:00:40.301772 2219 log.go:172] (0xc0008042c0) (0xc000570dc0) Stream removed, broadcasting: 3\nI0123 12:00:40.301951 2219 log.go:172] (0xc0008042c0) Data frame received for 1\nI0123 12:00:40.301974 2219 log.go:172] (0xc000700640) (1) Data frame handling\nI0123 12:00:40.301994 2219 log.go:172] (0xc000700640) (1) Data frame sent\nI0123 12:00:40.302098 2219 log.go:172] (0xc0008042c0) (0xc000700640) Stream removed, broadcasting: 1\nI0123 12:00:40.302187 2219 log.go:172] (0xc0008042c0) (0xc000560000) Stream removed, broadcasting: 5\nI0123 12:00:40.302223 2219 log.go:172] (0xc0008042c0) Go away received\nI0123 12:00:40.302543 2219 log.go:172] (0xc0008042c0) (0xc000700640) Stream removed, broadcasting: 1\nI0123 12:00:40.302621 2219 log.go:172] (0xc0008042c0) (0xc000570dc0) Stream removed, broadcasting: 3\nI0123 12:00:40.302636 2219 log.go:172] (0xc0008042c0) (0xc000560000) Stream removed, broadcasting: 5\n"
Jan 23 12:00:40.312: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 23 12:00:40.312: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 23 12:00:40.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 23 12:00:40.977: INFO: stderr: "I0123 12:00:40.621434 2242 log.go:172] (0xc0006d42c0) (0xc000702780) Create stream\nI0123 12:00:40.621902 2242 log.go:172] (0xc0006d42c0) (0xc000702780) Stream added, broadcasting: 1\nI0123 12:00:40.636917 2242 log.go:172] (0xc0006d42c0) Reply frame received for 1\nI0123 12:00:40.637007 2242 log.go:172] (0xc0006d42c0) (0xc000702820) Create stream\nI0123 12:00:40.637021 2242 log.go:172] (0xc0006d42c0) (0xc000702820) Stream added, broadcasting: 3\nI0123 12:00:40.638938 2242 log.go:172] (0xc0006d42c0) Reply frame received for 3\nI0123 12:00:40.638964 2242 log.go:172] (0xc0006d42c0) (0xc0007d6820) Create stream\nI0123 12:00:40.638989 2242 log.go:172] (0xc0006d42c0) (0xc0007d6820) Stream added, broadcasting: 5\nI0123 12:00:40.640762 2242 log.go:172] (0xc0006d42c0) Reply frame received for 5\nI0123 12:00:40.846455 2242 log.go:172] (0xc0006d42c0) Data frame received for 3\nI0123 12:00:40.846503 2242 log.go:172] (0xc000702820) (3) Data frame handling\nI0123 12:00:40.846527 2242 log.go:172] (0xc000702820) (3) Data frame sent\nI0123 12:00:40.969082 2242 log.go:172] (0xc0006d42c0) (0xc000702820) Stream removed, broadcasting: 3\nI0123 12:00:40.969300 2242 log.go:172] (0xc0006d42c0) Data frame received for 1\nI0123 12:00:40.969313 2242 log.go:172] (0xc000702780) (1) Data frame handling\nI0123 12:00:40.969334 2242 log.go:172] (0xc000702780) (1) Data frame sent\nI0123 12:00:40.969344 2242 log.go:172] (0xc0006d42c0) (0xc000702780)
Stream removed, broadcasting: 1\nI0123 12:00:40.969391 2242 log.go:172] (0xc0006d42c0) (0xc0007d6820) Stream removed, broadcasting: 5\nI0123 12:00:40.969479 2242 log.go:172] (0xc0006d42c0) Go away received\nI0123 12:00:40.969664 2242 log.go:172] (0xc0006d42c0) (0xc000702780) Stream removed, broadcasting: 1\nI0123 12:00:40.969674 2242 log.go:172] (0xc0006d42c0) (0xc000702820) Stream removed, broadcasting: 3\nI0123 12:00:40.969680 2242 log.go:172] (0xc0006d42c0) (0xc0007d6820) Stream removed, broadcasting: 5\n" Jan 23 12:00:40.978: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 23 12:00:40.978: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 23 12:00:40.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 23 12:00:41.600: INFO: stderr: "I0123 12:00:41.161534 2264 log.go:172] (0xc000724370) (0xc00078c640) Create stream\nI0123 12:00:41.162072 2264 log.go:172] (0xc000724370) (0xc00078c640) Stream added, broadcasting: 1\nI0123 12:00:41.180986 2264 log.go:172] (0xc000724370) Reply frame received for 1\nI0123 12:00:41.181077 2264 log.go:172] (0xc000724370) (0xc000686be0) Create stream\nI0123 12:00:41.181090 2264 log.go:172] (0xc000724370) (0xc000686be0) Stream added, broadcasting: 3\nI0123 12:00:41.184018 2264 log.go:172] (0xc000724370) Reply frame received for 3\nI0123 12:00:41.184039 2264 log.go:172] (0xc000724370) (0xc00078c6e0) Create stream\nI0123 12:00:41.184048 2264 log.go:172] (0xc000724370) (0xc00078c6e0) Stream added, broadcasting: 5\nI0123 12:00:41.187674 2264 log.go:172] (0xc000724370) Reply frame received for 5\nI0123 12:00:41.433278 2264 log.go:172] (0xc000724370) Data frame received for 3\nI0123 12:00:41.433339 2264 log.go:172] (0xc000686be0) (3) Data frame handling\nI0123 12:00:41.433362 2264 
log.go:172] (0xc000686be0) (3) Data frame sent\nI0123 12:00:41.592657 2264 log.go:172] (0xc000724370) (0xc000686be0) Stream removed, broadcasting: 3\nI0123 12:00:41.592916 2264 log.go:172] (0xc000724370) Data frame received for 1\nI0123 12:00:41.592926 2264 log.go:172] (0xc00078c640) (1) Data frame handling\nI0123 12:00:41.592938 2264 log.go:172] (0xc00078c640) (1) Data frame sent\nI0123 12:00:41.592942 2264 log.go:172] (0xc000724370) (0xc00078c640) Stream removed, broadcasting: 1\nI0123 12:00:41.593365 2264 log.go:172] (0xc000724370) (0xc00078c6e0) Stream removed, broadcasting: 5\nI0123 12:00:41.593443 2264 log.go:172] (0xc000724370) Go away received\nI0123 12:00:41.593575 2264 log.go:172] (0xc000724370) (0xc00078c640) Stream removed, broadcasting: 1\nI0123 12:00:41.593608 2264 log.go:172] (0xc000724370) (0xc000686be0) Stream removed, broadcasting: 3\nI0123 12:00:41.593633 2264 log.go:172] (0xc000724370) (0xc00078c6e0) Stream removed, broadcasting: 5\n" Jan 23 12:00:41.600: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 23 12:00:41.600: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 23 12:00:41.600: INFO: Waiting for statefulset status.replicas updated to 0 Jan 23 12:00:41.610: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Jan 23 12:00:51.625: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 23 12:00:51.625: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 23 12:00:51.625: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 23 12:00:51.653: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999808s Jan 23 12:00:52.707: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.984417775s Jan 23 12:00:53.728: INFO: Verifying statefulset ss doesn't scale past 
3 for another 7.929924044s Jan 23 12:00:54.765: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.90888933s Jan 23 12:00:55.785: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.872054539s Jan 23 12:00:56.816: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.852707826s Jan 23 12:00:57.838: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.821416065s Jan 23 12:00:58.863: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.799767783s Jan 23 12:00:59.901: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.773002717s Jan 23 12:01:00.926: INFO: Verifying statefulset ss doesn't scale past 3 for another 735.94373ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-5fcrk Jan 23 12:01:02.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 23 12:01:02.833: INFO: stderr: "I0123 12:01:02.185191 2285 log.go:172] (0xc0001386e0) (0xc000683400) Create stream\nI0123 12:01:02.185449 2285 log.go:172] (0xc0001386e0) (0xc000683400) Stream added, broadcasting: 1\nI0123 12:01:02.191208 2285 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0123 12:01:02.191276 2285 log.go:172] (0xc0001386e0) (0xc0006f2000) Create stream\nI0123 12:01:02.191285 2285 log.go:172] (0xc0001386e0) (0xc0006f2000) Stream added, broadcasting: 3\nI0123 12:01:02.192470 2285 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0123 12:01:02.192496 2285 log.go:172] (0xc0001386e0) (0xc000344000) Create stream\nI0123 12:01:02.192506 2285 log.go:172] (0xc0001386e0) (0xc000344000) Stream added, broadcasting: 5\nI0123 12:01:02.193482 2285 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0123 12:01:02.329168 2285 log.go:172] (0xc0001386e0) Data frame received for 3\nI0123 12:01:02.329266 2285 
log.go:172] (0xc0006f2000) (3) Data frame handling\nI0123 12:01:02.329287 2285 log.go:172] (0xc0006f2000) (3) Data frame sent\nI0123 12:01:02.817599 2285 log.go:172] (0xc0001386e0) (0xc0006f2000) Stream removed, broadcasting: 3\nI0123 12:01:02.817830 2285 log.go:172] (0xc0001386e0) Data frame received for 1\nI0123 12:01:02.817867 2285 log.go:172] (0xc000683400) (1) Data frame handling\nI0123 12:01:02.817887 2285 log.go:172] (0xc000683400) (1) Data frame sent\nI0123 12:01:02.817902 2285 log.go:172] (0xc0001386e0) (0xc000683400) Stream removed, broadcasting: 1\nI0123 12:01:02.818033 2285 log.go:172] (0xc0001386e0) (0xc000344000) Stream removed, broadcasting: 5\nI0123 12:01:02.818158 2285 log.go:172] (0xc0001386e0) Go away received\nI0123 12:01:02.818416 2285 log.go:172] (0xc0001386e0) (0xc000683400) Stream removed, broadcasting: 1\nI0123 12:01:02.818432 2285 log.go:172] (0xc0001386e0) (0xc0006f2000) Stream removed, broadcasting: 3\nI0123 12:01:02.818438 2285 log.go:172] (0xc0001386e0) (0xc000344000) Stream removed, broadcasting: 5\n" Jan 23 12:01:02.833: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 23 12:01:02.833: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 23 12:01:02.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 23 12:01:03.611: INFO: stderr: "I0123 12:01:03.156905 2307 log.go:172] (0xc0008782c0) (0xc0005a5360) Create stream\nI0123 12:01:03.157164 2307 log.go:172] (0xc0008782c0) (0xc0005a5360) Stream added, broadcasting: 1\nI0123 12:01:03.164185 2307 log.go:172] (0xc0008782c0) Reply frame received for 1\nI0123 12:01:03.164230 2307 log.go:172] (0xc0008782c0) (0xc00071a000) Create stream\nI0123 12:01:03.164238 2307 log.go:172] (0xc0008782c0) (0xc00071a000) Stream added, broadcasting: 3\nI0123 
12:01:03.165166 2307 log.go:172] (0xc0008782c0) Reply frame received for 3\nI0123 12:01:03.165185 2307 log.go:172] (0xc0008782c0) (0xc0005a5400) Create stream\nI0123 12:01:03.165195 2307 log.go:172] (0xc0008782c0) (0xc0005a5400) Stream added, broadcasting: 5\nI0123 12:01:03.168194 2307 log.go:172] (0xc0008782c0) Reply frame received for 5\nI0123 12:01:03.309833 2307 log.go:172] (0xc0008782c0) Data frame received for 3\nI0123 12:01:03.309969 2307 log.go:172] (0xc00071a000) (3) Data frame handling\nI0123 12:01:03.310028 2307 log.go:172] (0xc00071a000) (3) Data frame sent\nI0123 12:01:03.592952 2307 log.go:172] (0xc0008782c0) Data frame received for 1\nI0123 12:01:03.593208 2307 log.go:172] (0xc0005a5360) (1) Data frame handling\nI0123 12:01:03.593280 2307 log.go:172] (0xc0005a5360) (1) Data frame sent\nI0123 12:01:03.597075 2307 log.go:172] (0xc0008782c0) (0xc0005a5400) Stream removed, broadcasting: 5\nI0123 12:01:03.597266 2307 log.go:172] (0xc0008782c0) (0xc0005a5360) Stream removed, broadcasting: 1\nI0123 12:01:03.597670 2307 log.go:172] (0xc0008782c0) (0xc00071a000) Stream removed, broadcasting: 3\nI0123 12:01:03.597823 2307 log.go:172] (0xc0008782c0) Go away received\nI0123 12:01:03.597902 2307 log.go:172] (0xc0008782c0) (0xc0005a5360) Stream removed, broadcasting: 1\nI0123 12:01:03.597922 2307 log.go:172] (0xc0008782c0) (0xc00071a000) Stream removed, broadcasting: 3\nI0123 12:01:03.597933 2307 log.go:172] (0xc0008782c0) (0xc0005a5400) Stream removed, broadcasting: 5\n" Jan 23 12:01:03.611: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 23 12:01:03.611: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 23 12:01:03.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 23 12:01:03.942: INFO: rc: 126 Jan 23 
12:01:03.943: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] cannot exec in a stopped state: unknown I0123 12:01:03.815748 2328 log.go:172] (0xc0006fa370) (0xc0007246e0) Create stream I0123 12:01:03.816053 2328 log.go:172] (0xc0006fa370) (0xc0007246e0) Stream added, broadcasting: 1 I0123 12:01:03.827532 2328 log.go:172] (0xc0006fa370) Reply frame received for 1 I0123 12:01:03.828021 2328 log.go:172] (0xc0006fa370) (0xc0005a6d20) Create stream I0123 12:01:03.828095 2328 log.go:172] (0xc0006fa370) (0xc0005a6d20) Stream added, broadcasting: 3 I0123 12:01:03.831751 2328 log.go:172] (0xc0006fa370) Reply frame received for 3 I0123 12:01:03.831850 2328 log.go:172] (0xc0006fa370) (0xc0002c8000) Create stream I0123 12:01:03.831917 2328 log.go:172] (0xc0006fa370) (0xc0002c8000) Stream added, broadcasting: 5 I0123 12:01:03.839480 2328 log.go:172] (0xc0006fa370) Reply frame received for 5 I0123 12:01:03.921049 2328 log.go:172] (0xc0006fa370) Data frame received for 3 I0123 12:01:03.921158 2328 log.go:172] (0xc0005a6d20) (3) Data frame handling I0123 12:01:03.921185 2328 log.go:172] (0xc0005a6d20) (3) Data frame sent I0123 12:01:03.928045 2328 log.go:172] (0xc0006fa370) (0xc0005a6d20) Stream removed, broadcasting: 3 I0123 12:01:03.928263 2328 log.go:172] (0xc0006fa370) Data frame received for 1 I0123 12:01:03.928280 2328 log.go:172] (0xc0007246e0) (1) Data frame handling I0123 12:01:03.928290 2328 log.go:172] (0xc0007246e0) (1) Data frame sent I0123 12:01:03.928296 2328 log.go:172] (0xc0006fa370) (0xc0007246e0) Stream removed, broadcasting: 1 I0123 12:01:03.928840 2328 log.go:172] (0xc0006fa370) (0xc0002c8000) Stream removed, broadcasting: 5 I0123 12:01:03.928922 2328 log.go:172] (0xc0006fa370) Go away received I0123 12:01:03.929056 2328 log.go:172] (0xc0006fa370) (0xc0007246e0) 
Stream removed, broadcasting: 1 I0123 12:01:03.929152 2328 log.go:172] (0xc0006fa370) (0xc0005a6d20) Stream removed, broadcasting: 3 I0123 12:01:03.929166 2328 log.go:172] (0xc0006fa370) (0xc0002c8000) Stream removed, broadcasting: 5 command terminated with exit code 126 [] 0xc0014f0540 exit status 126 true [0xc00016f190 0xc00016f248 0xc00016f290] [0xc00016f190 0xc00016f248 0xc00016f290] [0xc00016f238 0xc00016f280] [0x935700 0x935700] 0xc001fd9b00 }: Command stdout: cannot exec in a stopped state: unknown stderr: I0123 12:01:03.815748 2328 log.go:172] (0xc0006fa370) (0xc0007246e0) Create stream I0123 12:01:03.816053 2328 log.go:172] (0xc0006fa370) (0xc0007246e0) Stream added, broadcasting: 1 I0123 12:01:03.827532 2328 log.go:172] (0xc0006fa370) Reply frame received for 1 I0123 12:01:03.828021 2328 log.go:172] (0xc0006fa370) (0xc0005a6d20) Create stream I0123 12:01:03.828095 2328 log.go:172] (0xc0006fa370) (0xc0005a6d20) Stream added, broadcasting: 3 I0123 12:01:03.831751 2328 log.go:172] (0xc0006fa370) Reply frame received for 3 I0123 12:01:03.831850 2328 log.go:172] (0xc0006fa370) (0xc0002c8000) Create stream I0123 12:01:03.831917 2328 log.go:172] (0xc0006fa370) (0xc0002c8000) Stream added, broadcasting: 5 I0123 12:01:03.839480 2328 log.go:172] (0xc0006fa370) Reply frame received for 5 I0123 12:01:03.921049 2328 log.go:172] (0xc0006fa370) Data frame received for 3 I0123 12:01:03.921158 2328 log.go:172] (0xc0005a6d20) (3) Data frame handling I0123 12:01:03.921185 2328 log.go:172] (0xc0005a6d20) (3) Data frame sent I0123 12:01:03.928045 2328 log.go:172] (0xc0006fa370) (0xc0005a6d20) Stream removed, broadcasting: 3 I0123 12:01:03.928263 2328 log.go:172] (0xc0006fa370) Data frame received for 1 I0123 12:01:03.928280 2328 log.go:172] (0xc0007246e0) (1) Data frame handling I0123 12:01:03.928290 2328 log.go:172] (0xc0007246e0) (1) Data frame sent I0123 12:01:03.928296 2328 log.go:172] (0xc0006fa370) (0xc0007246e0) Stream removed, broadcasting: 1 I0123 12:01:03.928840 
2328 log.go:172] (0xc0006fa370) (0xc0002c8000) Stream removed, broadcasting: 5 I0123 12:01:03.928922 2328 log.go:172] (0xc0006fa370) Go away received I0123 12:01:03.929056 2328 log.go:172] (0xc0006fa370) (0xc0007246e0) Stream removed, broadcasting: 1 I0123 12:01:03.929152 2328 log.go:172] (0xc0006fa370) (0xc0005a6d20) Stream removed, broadcasting: 3 I0123 12:01:03.929166 2328 log.go:172] (0xc0006fa370) (0xc0002c8000) Stream removed, broadcasting: 5 command terminated with exit code 126 error: exit status 126 Jan 23 12:01:13.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 23 12:01:14.105: INFO: rc: 1 Jan 23 12:01:14.105: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0014f0660 exit status 1 true [0xc00016f2a0 0xc00016f2e8 0xc00016f388] [0xc00016f2a0 0xc00016f2e8 0xc00016f388] [0xc00016f2e0 0xc00016f348] [0x935700 0x935700] 0xc001ebc780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 23 12:01:24.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 23 12:01:24.246: INFO: rc: 1 Jan 23 12:01:24.247: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00187b410 exit status 1 true [0xc001610220 0xc001610238 0xc001610250] 
[0xc001610220 0xc001610238 0xc001610250] [0xc001610230 0xc001610248] [0x935700 0x935700] 0xc001e47ce0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 23 12:01:34.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 23 12:01:34.398: INFO: rc: 1 Jan 23 12:01:34.398: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00187b560 exit status 1 true [0xc001610258 0xc001610270 0xc001610288] [0xc001610258 0xc001610270 0xc001610288] [0xc001610268 0xc001610280] [0x935700 0x935700] 0xc001e47f80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 23 12:01:44.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 23 12:01:44.586: INFO: rc: 1 Jan 23 12:01:44.587: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0017a1140 exit status 1 true [0xc000467108 0xc000467120 0xc000467140] [0xc000467108 0xc000467120 0xc000467140] [0xc000467118 0xc000467138] [0x935700 0x935700] 0xc001ace0c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 23 12:01:54.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 23 12:01:54.724: INFO: rc: 1 Jan 23 12:01:54.724: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00155c1b0 exit status 1 true [0xc00000e100 0xc001610008 0xc001610038] [0xc00000e100 0xc001610008 0xc001610038] [0xc001610000 0xc001610030] [0x935700 0x935700] 0xc001ee41e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 23 12:02:04.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 23 12:02:04.890: INFO: rc: 1 Jan 23 12:02:04.890: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0004c76b0 exit status 1 true [0xc000466040 0xc000466118 0xc000466198] [0xc000466040 0xc000466118 0xc000466198] [0xc000466108 0xc000466180] [0x935700 0x935700] 0xc001fd81e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 23 12:02:14.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 23 12:02:15.041: INFO: rc: 1 Jan 23 12:02:15.042: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0020d8180 exit status 1 true [0xc000cb2000 0xc000cb2018 0xc000cb2030] [0xc000cb2000 0xc000cb2018 0xc000cb2030] [0xc000cb2010 0xc000cb2028] [0x935700 0x935700] 0xc002060240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 23 12:02:25.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 23 12:02:25.202: INFO: rc: 1 Jan 23 12:02:25.202: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0004c77d0 exit status 1 true [0xc0004661a8 0xc0004661e8 0xc000466258] [0xc0004661a8 0xc0004661e8 0xc000466258] [0xc0004661d0 0xc000466228] [0x935700 0x935700] 0xc001fd99e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 23 12:02:35.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 23 12:02:35.356: INFO: rc: 1 Jan 23 12:02:35.356: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0020d82a0 exit status 1 true [0xc000cb2038 0xc000cb2050 0xc000cb2068] [0xc000cb2038 0xc000cb2050 0xc000cb2068] [0xc000cb2048 0xc000cb2060] 
[0x935700 0x935700] 0xc0020604e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 23 12:02:45.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 23 12:02:45.532: INFO: rc: 1 Jan 23 12:02:45.532: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00155c360 exit status 1 true [0xc001610040 0xc001610060 0xc0016100a0] [0xc001610040 0xc001610060 0xc0016100a0] [0xc001610050 0xc001610088] [0x935700 0x935700] 0xc001ee4480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 23 12:02:55.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 23 12:02:55.684: INFO: rc: 1 Jan 23 12:02:55.685: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0004c7980 exit status 1 true [0xc000466278 0xc000466338 0xc0004663f0] [0xc000466278 0xc000466338 0xc0004663f0] [0xc000466328 0xc0004663c0] [0x935700 0x935700] 0xc001fd9e00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 23 12:03:05.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true' Jan 23 12:03:05.844: INFO: rc: 1 Jan 23 12:03:05.845: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00155c4b0 exit status 1 true [0xc0016100a8 0xc0016100c0 0xc0016100d8] [0xc0016100a8 0xc0016100c0 0xc0016100d8] [0xc0016100b8 0xc0016100d0] [0x935700 0x935700] 0xc001ee4780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 23 12:03:15.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 23 12:03:15.993: INFO: rc: 1 Jan 23 12:03:15.993: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0020d8420 exit status 1 true [0xc000cb2070 0xc000cb2088 0xc000cb20a0] [0xc000cb2070 0xc000cb2088 0xc000cb20a0] [0xc000cb2080 0xc000cb2098] [0x935700 0x935700] 0xc002060780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 23 12:03:25.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 23 12:03:26.164: INFO: rc: 1 Jan 23 12:03:26.165: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00155c630 exit status 1 true [0xc0016100e0 0xc0016100f8 0xc001610110] [0xc0016100e0 0xc0016100f8 0xc001610110] [0xc0016100f0 0xc001610108] [0x935700 0x935700] 0xc002014000 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 23 12:03:36.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 23 12:03:36.290: INFO: rc: 1
Jan 23 12:03:36.290: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0020d8540 exit status 1 true [0xc000cb20a8 0xc000cb20c0 0xc000cb20d8] [0xc000cb20a8 0xc000cb20c0 0xc000cb20d8] [0xc000cb20b8 0xc000cb20d0] [0x935700 0x935700] 0xc002060a20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

(The identical Running / rc: 1 / "Waiting 10s to retry failed RunHostCmd" sequence repeats every 10s from 12:03:46 through 12:05:58, each attempt failing with the same NotFound error; the repeated entries and their Go struct dumps are omitted here.)
Jan 23 12:06:08.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5fcrk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 23 12:06:08.723: INFO: rc: 1
Jan 23 12:06:08.724: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2:
Jan 23 12:06:08.724: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 23 12:06:08.753: INFO: Deleting all statefulset in ns e2e-tests-statefulset-5fcrk
Jan 23 12:06:08.760: INFO: Scaling statefulset ss to 0
Jan 23 12:06:08.778: INFO: Waiting for statefulset status.replicas updated to 0
Jan 23 12:06:08.782: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 12:06:08.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-5fcrk" for this suite.
Jan 23 12:06:16.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:06:17.108: INFO: namespace: e2e-tests-statefulset-5fcrk, resource: bindings, ignored listing per whitelist
Jan 23 12:06:17.112: INFO: namespace e2e-tests-statefulset-5fcrk deletion completed in 8.237072329s

• [SLOW TEST:400.083 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application
  should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 12:06:17.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Jan 23 12:06:17.245: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan 23 12:06:17.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-t8pmg'
Jan 23 12:06:19.484: INFO: stderr: ""
Jan 23 12:06:19.484: INFO: stdout: "service/redis-slave created\n"
Jan 23 12:06:19.485: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan 23 12:06:19.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-t8pmg'
Jan 23 12:06:19.992: INFO: stderr: ""
Jan 23 12:06:19.992: INFO: stdout: "service/redis-master created\n"
Jan 23 12:06:19.993: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 23 12:06:19.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-t8pmg'
Jan 23 12:06:20.390: INFO: stderr: ""
Jan 23 12:06:20.390: INFO: stdout: "service/frontend created\n"
Jan 23 12:06:20.392: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan 23 12:06:20.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-t8pmg'
Jan 23 12:06:20.802: INFO: stderr: ""
Jan 23 12:06:20.802: INFO: stdout: "deployment.extensions/frontend created\n"
Jan 23 12:06:20.803: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 23 12:06:20.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-t8pmg'
Jan 23 12:06:21.155: INFO: stderr: ""
Jan 23 12:06:21.155: INFO: stdout: "deployment.extensions/redis-master created\n"
Jan 23 12:06:21.156: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan 23 12:06:21.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-t8pmg'
Jan 23 12:06:21.512: INFO: stderr: ""
Jan 23 12:06:21.512: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jan 23 12:06:21.512: INFO: Waiting for all frontend pods to be Running.
Jan 23 12:06:51.565: INFO: Waiting for frontend to serve content.
Jan 23 12:06:51.720: INFO: Trying to add a new entry to the guestbook.
Jan 23 12:06:51.781: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan 23 12:06:51.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-t8pmg'
Jan 23 12:06:52.157: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 23 12:06:52.158: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 23 12:06:52.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-t8pmg'
Jan 23 12:06:52.594: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 23 12:06:52.595: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 23 12:06:52.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-t8pmg'
Jan 23 12:06:52.764: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 23 12:06:52.764: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 23 12:06:52.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-t8pmg'
Jan 23 12:06:52.933: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 23 12:06:52.933: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 23 12:06:52.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-t8pmg'
Jan 23 12:06:53.332: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 23 12:06:53.332: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 23 12:06:53.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-t8pmg'
Jan 23 12:06:53.580: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n"
Jan 23 12:06:53.580: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 12:06:53.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-t8pmg" for this suite.
Jan 23 12:07:43.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:07:43.912: INFO: namespace: e2e-tests-kubectl-t8pmg, resource: bindings, ignored listing per whitelist
Jan 23 12:07:43.935: INFO: namespace e2e-tests-kubectl-t8pmg deletion completed in 50.266852929s

• [SLOW TEST:86.822 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 12:07:43.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 12:07:54.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-hjd5n" for this suite.
Jan 23 12:08:48.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:08:48.679: INFO: namespace: e2e-tests-kubelet-test-hjd5n, resource: bindings, ignored listing per whitelist
Jan 23 12:08:48.728: INFO: namespace e2e-tests-kubelet-test-hjd5n deletion completed in 54.197922601s

• [SLOW TEST:64.792 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 12:08:48.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 23 12:08:48.979: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1cccd3dc-3dd9-11ea-bb65-0242ac110005" in namespace "e2e-tests-downward-api-65k2v" to be "success or failure"
Jan 23 12:08:49.019: INFO: Pod "downwardapi-volume-1cccd3dc-3dd9-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 39.134449ms
Jan 23 12:08:51.038: INFO: Pod "downwardapi-volume-1cccd3dc-3dd9-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058514448s
Jan 23 12:08:53.059: INFO: Pod "downwardapi-volume-1cccd3dc-3dd9-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079322038s
Jan 23 12:08:55.092: INFO: Pod "downwardapi-volume-1cccd3dc-3dd9-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112828021s
Jan 23 12:08:57.108: INFO: Pod "downwardapi-volume-1cccd3dc-3dd9-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.128283292s
Jan 23 12:08:59.126: INFO: Pod "downwardapi-volume-1cccd3dc-3dd9-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.146900184s
STEP: Saw pod success
Jan 23 12:08:59.126: INFO: Pod "downwardapi-volume-1cccd3dc-3dd9-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 12:08:59.139: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-1cccd3dc-3dd9-11ea-bb65-0242ac110005 container client-container:
STEP: delete the pod
Jan 23 12:08:59.813: INFO: Waiting for pod downwardapi-volume-1cccd3dc-3dd9-11ea-bb65-0242ac110005 to disappear
Jan 23 12:08:59.863: INFO: Pod downwardapi-volume-1cccd3dc-3dd9-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 12:08:59.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-65k2v" for this suite.
Jan 23 12:09:06.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:09:06.247: INFO: namespace: e2e-tests-downward-api-65k2v, resource: bindings, ignored listing per whitelist
Jan 23 12:09:06.305: INFO: namespace e2e-tests-downward-api-65k2v deletion completed in 6.403885744s

• [SLOW TEST:17.576 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info
  should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 12:09:06.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Jan 23 12:09:06.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 23 12:09:06.699: INFO: stderr: ""
Jan 23 12:09:06.699: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 12:09:06.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-kqkmd" for this suite.
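The cluster-info stdout above arrives wrapped in ANSI color escapes (`\x1b[0;32m` green, `\x1b[0;33m` yellow, `\x1b[0m` reset). A minimal sketch, not part of the e2e framework, of stripping those escapes when post-processing such log output:

```python
import re

# Matches CSI color sequences such as "\x1b[0;32m" and the reset "\x1b[0m",
# as seen in the kubectl cluster-info stdout above.
ANSI_COLOR_RE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(text: str) -> str:
    """Remove ANSI color escapes so output can be matched as plain text."""
    return ANSI_COLOR_RE.sub("", text)
```

For example, `strip_ansi("\x1b[0;32mKubernetes master\x1b[0m is running")` yields `"Kubernetes master is running"`.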
Jan 23 12:09:12.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:09:12.958: INFO: namespace: e2e-tests-kubectl-kqkmd, resource: bindings, ignored listing per whitelist
Jan 23 12:09:12.972: INFO: namespace e2e-tests-kubectl-kqkmd deletion completed in 6.248354633s

• [SLOW TEST:6.667 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 12:09:12.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 23 12:09:13.324: INFO: Waiting up to 5m0s for pod "pod-2b5f3cc7-3dd9-11ea-bb65-0242ac110005" in namespace "e2e-tests-emptydir-kpxbz" to be "success or failure"
Jan 23 12:09:13.598: INFO: Pod "pod-2b5f3cc7-3dd9-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false.
Elapsed: 273.581979ms
Jan 23 12:09:15.610: INFO: Pod "pod-2b5f3cc7-3dd9-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.285523056s
Jan 23 12:09:17.663: INFO: Pod "pod-2b5f3cc7-3dd9-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.338419876s
Jan 23 12:09:19.676: INFO: Pod "pod-2b5f3cc7-3dd9-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.351426545s
Jan 23 12:09:21.688: INFO: Pod "pod-2b5f3cc7-3dd9-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.363806191s
Jan 23 12:09:23.708: INFO: Pod "pod-2b5f3cc7-3dd9-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.383251415s
STEP: Saw pod success
Jan 23 12:09:23.708: INFO: Pod "pod-2b5f3cc7-3dd9-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 12:09:23.712: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-2b5f3cc7-3dd9-11ea-bb65-0242ac110005 container test-container:
STEP: delete the pod
Jan 23 12:09:24.440: INFO: Waiting for pod pod-2b5f3cc7-3dd9-11ea-bb65-0242ac110005 to disappear
Jan 23 12:09:24.573: INFO: Pod pod-2b5f3cc7-3dd9-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 12:09:24.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-kpxbz" for this suite.
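The repeated `Phase="Pending" … Elapsed:` entries above come from the framework polling the pod roughly every two seconds until it reaches a terminal phase within the 5m0s budget. A hedged Python sketch of that polling pattern (the function and parameter names here are illustrative, not the framework's actual Go API):

```python
import time

def wait_for_pod_phase(get_phase, terminal=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0):
    """Poll get_phase() until it returns a terminal phase or the timeout
    elapses, mirroring the 'Pending ... Pending ... Succeeded' progression
    logged above."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in terminal:
            return phase
        time.sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase within the timeout")
```

The e2e framework's actual helper additionally logs each observation with its elapsed time, which is what produces the `Elapsed:` lines in the output.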
Jan 23 12:09:30.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:09:30.812: INFO: namespace: e2e-tests-emptydir-kpxbz, resource: bindings, ignored listing per whitelist
Jan 23 12:09:30.922: INFO: namespace e2e-tests-emptydir-kpxbz deletion completed in 6.331542484s

• [SLOW TEST:17.950 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 12:09:30.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 23 12:09:31.055: INFO: Waiting up to 5m0s for pod "downwardapi-volume-35f1cb28-3dd9-11ea-bb65-0242ac110005" in namespace "e2e-tests-projected-vpsdp" to be "success or failure"
Jan 23 12:09:31.077: INFO: Pod "downwardapi-volume-35f1cb28-3dd9-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false.
Elapsed: 21.046288ms Jan 23 12:09:33.854: INFO: Pod "downwardapi-volume-35f1cb28-3dd9-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.798593685s Jan 23 12:09:35.880: INFO: Pod "downwardapi-volume-35f1cb28-3dd9-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.824001447s Jan 23 12:09:37.900: INFO: Pod "downwardapi-volume-35f1cb28-3dd9-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.84398081s Jan 23 12:09:39.925: INFO: Pod "downwardapi-volume-35f1cb28-3dd9-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.869460614s Jan 23 12:09:42.017: INFO: Pod "downwardapi-volume-35f1cb28-3dd9-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.961941148s STEP: Saw pod success Jan 23 12:09:42.018: INFO: Pod "downwardapi-volume-35f1cb28-3dd9-11ea-bb65-0242ac110005" satisfied condition "success or failure" Jan 23 12:09:42.030: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-35f1cb28-3dd9-11ea-bb65-0242ac110005 container client-container: STEP: delete the pod Jan 23 12:09:42.514: INFO: Waiting for pod downwardapi-volume-35f1cb28-3dd9-11ea-bb65-0242ac110005 to disappear Jan 23 12:09:42.531: INFO: Pod downwardapi-volume-35f1cb28-3dd9-11ea-bb65-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:09:42.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vpsdp" for this suite. 
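The wait loop above (repeated `Phase="Pending"` polls with growing `Elapsed` values, ending at `Succeeded`) is the framework's standard "success or failure" condition wait. A minimal sketch of that polling pattern, assuming a hypothetical `get_phase` callable in place of the real API client:

```python
import time

# Hedged sketch (not the e2e framework's actual code): poll a pod's phase
# until it reaches a terminal state or the 5m0s timeout from the log expires.
# `get_phase`, `clock`, and `sleep` are stand-ins so the loop is testable
# without a cluster.
def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed > timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        sleep(interval)

# Simulated phase sequence, roughly matching the log: several Pending polls,
# then Succeeded. sleep is a no-op so the sketch runs instantly.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
phase, _ = wait_for_terminal_phase(lambda: next(phases), sleep=lambda s: None)
print(phase)  # Succeeded
```

In the real framework the interval and timeout are fixed by the test (here 5m0s), and the terminal condition is the "success or failure" pair of phases shown in the log.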
Jan 23 12:09:48.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:09:48.934: INFO: namespace: e2e-tests-projected-vpsdp, resource: bindings, ignored listing per whitelist Jan 23 12:09:48.973: INFO: namespace e2e-tests-projected-vpsdp deletion completed in 6.407112854s • [SLOW TEST:18.050 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 12:09:48.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 23 12:09:49.400: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jan 23 12:09:49.443: INFO: Pod name sample-pod: Found 0 pods out of 1 Jan 23 12:09:54.619: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 23 12:09:58.666: INFO: Creating deployment "test-rolling-update-deployment" Jan 23 12:09:58.693: INFO: Ensuring deployment 
"test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jan 23 12:09:58.728: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jan 23 12:10:01.437: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jan 23 12:10:01.449: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715378198, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715378198, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715378199, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715378198, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 12:10:03.464: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715378198, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715378198, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715378199, 
loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715378198, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 12:10:05.567: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715378198, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715378198, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715378199, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715378198, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 12:10:07.469: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715378198, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715378198, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715378199, loc:(*time.Location)(0x7950ac0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715378198, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 12:10:09.468: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jan 23 12:10:09.579: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-xjf6t,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xjf6t/deployments/test-rolling-update-deployment,UID:4668b526-3dd9-11ea-a994-fa163e34d433,ResourceVersion:19186651,Generation:1,CreationTimestamp:2020-01-23 12:09:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-23 12:09:58 +0000 UTC 2020-01-23 12:09:58 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-23 12:10:07 +0000 UTC 2020-01-23 12:09:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 23 12:10:09.592: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-xjf6t,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xjf6t/replicasets/test-rolling-update-deployment-75db98fb4c,UID:467dbee4-3dd9-11ea-a994-fa163e34d433,ResourceVersion:19186642,Generation:1,CreationTimestamp:2020-01-23 12:09:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 4668b526-3dd9-11ea-a994-fa163e34d433 0xc000eb0937 0xc000eb0938}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 23 12:10:09.592: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 23 12:10:09.593: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-xjf6t,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xjf6t/replicasets/test-rolling-update-controller,UID:40e27d79-3dd9-11ea-a994-fa163e34d433,ResourceVersion:19186650,Generation:2,CreationTimestamp:2020-01-23 12:09:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 4668b526-3dd9-11ea-a994-fa163e34d433 0xc000eb085f 0xc000eb0870}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 23 12:10:09.610: INFO: Pod "test-rolling-update-deployment-75db98fb4c-6487f" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-6487f,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-xjf6t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xjf6t/pods/test-rolling-update-deployment-75db98fb4c-6487f,UID:467eecdc-3dd9-11ea-a994-fa163e34d433,ResourceVersion:19186641,Generation:0,CreationTimestamp:2020-01-23 12:09:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 467dbee4-3dd9-11ea-a994-fa163e34d433 0xc000eb12a7 0xc000eb12a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fc9w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fc9w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-9fc9w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000eb1310} {node.kubernetes.io/unreachable Exists NoExecute 0xc000eb1330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:09:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:10:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:10:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:09:58 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-23 12:09:59 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-23 12:10:07 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://2dd32ee2d8aad8f989e2ed82969f7cd05d9590fac8c3b32968d77fb8d89c8c90}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:10:09.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-deployment-xjf6t" for this suite. Jan 23 12:10:17.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:10:17.678: INFO: namespace: e2e-tests-deployment-xjf6t, resource: bindings, ignored listing per whitelist Jan 23 12:10:17.761: INFO: namespace e2e-tests-deployment-xjf6t deletion completed in 8.141918114s • [SLOW TEST:28.788 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 12:10:17.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-52630e27-3dd9-11ea-bb65-0242ac110005 STEP: Creating secret with name s-test-opt-upd-52630ecb-3dd9-11ea-bb65-0242ac110005 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-52630e27-3dd9-11ea-bb65-0242ac110005 STEP: Updating secret s-test-opt-upd-52630ecb-3dd9-11ea-bb65-0242ac110005 STEP: Creating secret with name s-test-opt-create-52630f02-3dd9-11ea-bb65-0242ac110005 STEP: waiting to observe update in volume 
[AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:10:35.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-6spzx" for this suite. Jan 23 12:10:59.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:10:59.369: INFO: namespace: e2e-tests-secrets-6spzx, resource: bindings, ignored listing per whitelist Jan 23 12:10:59.404: INFO: namespace e2e-tests-secrets-6spzx deletion completed in 24.240756722s • [SLOW TEST:41.643 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 12:10:59.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-nprbp 
Jan 23 12:11:09.938: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-nprbp STEP: checking the pod's current state and verifying that restartCount is present Jan 23 12:11:09.948: INFO: Initial restart count of pod liveness-http is 0 Jan 23 12:11:26.147: INFO: Restart count of pod e2e-tests-container-probe-nprbp/liveness-http is now 1 (16.198946432s elapsed) Jan 23 12:11:46.499: INFO: Restart count of pod e2e-tests-container-probe-nprbp/liveness-http is now 2 (36.551081496s elapsed) Jan 23 12:12:06.805: INFO: Restart count of pod e2e-tests-container-probe-nprbp/liveness-http is now 3 (56.857148619s elapsed) Jan 23 12:12:25.201: INFO: Restart count of pod e2e-tests-container-probe-nprbp/liveness-http is now 4 (1m15.252637192s elapsed) Jan 23 12:13:34.537: INFO: Restart count of pod e2e-tests-container-probe-nprbp/liveness-http is now 5 (2m24.589504764s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:13:34.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-nprbp" for this suite. 
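The probe test above records the pod's `restartCount` at each failure (0, 1, 2, 3, 4, 5 in the log) and asserts the sequence only ever grows. The check itself reduces to a monotonicity test; a minimal sketch:

```python
# Hedged sketch of the property the test asserts: as the liveness probe keeps
# failing and kubelet restarts the container, the observed restartCount values
# must form a non-decreasing sequence. The helper name is ours, not the
# framework's.
def is_monotonic_nondecreasing(counts):
    return all(a <= b for a, b in zip(counts, counts[1:]))

observed = [0, 1, 2, 3, 4, 5]  # restart counts reported in the log above
assert is_monotonic_nondecreasing(observed)
# A count that ever went down would indicate lost or corrupted status.
assert not is_monotonic_nondecreasing([0, 1, 3, 2])
print("restart counts are monotonically increasing")
```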
Jan 23 12:13:40.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:13:40.921: INFO: namespace: e2e-tests-container-probe-nprbp, resource: bindings, ignored listing per whitelist Jan 23 12:13:40.991: INFO: namespace e2e-tests-container-probe-nprbp deletion completed in 6.234667933s • [SLOW TEST:161.586 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 12:13:40.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-cb0df3d4-3dd9-11ea-bb65-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 23 12:13:41.225: INFO: Waiting up to 5m0s for pod "pod-configmaps-cb0eae61-3dd9-11ea-bb65-0242ac110005" in namespace "e2e-tests-configmap-m95m7" to be "success or failure" Jan 23 12:13:41.241: INFO: Pod "pod-configmaps-cb0eae61-3dd9-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.896058ms Jan 23 12:13:43.270: INFO: Pod "pod-configmaps-cb0eae61-3dd9-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044806206s Jan 23 12:13:45.284: INFO: Pod "pod-configmaps-cb0eae61-3dd9-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059110406s Jan 23 12:13:47.325: INFO: Pod "pod-configmaps-cb0eae61-3dd9-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100598267s Jan 23 12:13:49.474: INFO: Pod "pod-configmaps-cb0eae61-3dd9-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.249650699s Jan 23 12:13:51.538: INFO: Pod "pod-configmaps-cb0eae61-3dd9-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.313624328s STEP: Saw pod success Jan 23 12:13:51.539: INFO: Pod "pod-configmaps-cb0eae61-3dd9-11ea-bb65-0242ac110005" satisfied condition "success or failure" Jan 23 12:13:51.557: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-cb0eae61-3dd9-11ea-bb65-0242ac110005 container configmap-volume-test: STEP: delete the pod Jan 23 12:13:51.658: INFO: Waiting for pod pod-configmaps-cb0eae61-3dd9-11ea-bb65-0242ac110005 to disappear Jan 23 12:13:51.728: INFO: Pod pod-configmaps-cb0eae61-3dd9-11ea-bb65-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:13:51.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-m95m7" for this suite. 
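The ConfigMap test above verifies `defaultMode`: every key projected into the volume becomes a file whose permission bits are `defaultMode` unless the item specifies its own mode (pod specs serialize the mode in decimal, so the usual 0644 appears as 420). A small sketch of that rule, with an assumed helper name:

```python
# Sketch (assumed helper, not the e2e framework's code) of how defaultMode
# applies to a projected ConfigMap volume: per-item modes win, defaultMode
# fills in the rest.
def projected_modes(keys, default_mode=0o644, per_item_modes=None):
    per_item_modes = per_item_modes or {}
    return {k: per_item_modes.get(k, default_mode) for k in keys}

# The defaultMode test mounts the volume with a non-default mode and checks
# the resulting file permissions inside the pod.
modes = projected_modes(["data-1", "data-2"], default_mode=0o400)
print({k: oct(v) for k, v in modes.items()})  # {'data-1': '0o400', 'data-2': '0o400'}
```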
Jan 23 12:13:57.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:13:57.859: INFO: namespace: e2e-tests-configmap-m95m7, resource: bindings, ignored listing per whitelist
Jan 23 12:13:57.962: INFO: namespace e2e-tests-configmap-m95m7 deletion completed in 6.224042856s
• [SLOW TEST:16.970 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Watchers
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 12:13:57.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 23 12:13:58.242: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2nkq4,SelfLink:/api/v1/namespaces/e2e-tests-watch-2nkq4/configmaps/e2e-watch-test-label-changed,UID:d52edb85-3dd9-11ea-a994-fa163e34d433,ResourceVersion:19187076,Generation:0,CreationTimestamp:2020-01-23 12:13:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 23 12:13:58.242: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2nkq4,SelfLink:/api/v1/namespaces/e2e-tests-watch-2nkq4/configmaps/e2e-watch-test-label-changed,UID:d52edb85-3dd9-11ea-a994-fa163e34d433,ResourceVersion:19187077,Generation:0,CreationTimestamp:2020-01-23 12:13:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 23 12:13:58.242: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2nkq4,SelfLink:/api/v1/namespaces/e2e-tests-watch-2nkq4/configmaps/e2e-watch-test-label-changed,UID:d52edb85-3dd9-11ea-a994-fa163e34d433,ResourceVersion:19187078,Generation:0,CreationTimestamp:2020-01-23 12:13:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 23 12:14:08.400: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2nkq4,SelfLink:/api/v1/namespaces/e2e-tests-watch-2nkq4/configmaps/e2e-watch-test-label-changed,UID:d52edb85-3dd9-11ea-a994-fa163e34d433,ResourceVersion:19187092,Generation:0,CreationTimestamp:2020-01-23 12:13:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 23 12:14:08.401: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2nkq4,SelfLink:/api/v1/namespaces/e2e-tests-watch-2nkq4/configmaps/e2e-watch-test-label-changed,UID:d52edb85-3dd9-11ea-a994-fa163e34d433,ResourceVersion:19187093,Generation:0,CreationTimestamp:2020-01-23 12:13:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 23 12:14:08.401: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2nkq4,SelfLink:/api/v1/namespaces/e2e-tests-watch-2nkq4/configmaps/e2e-watch-test-label-changed,UID:d52edb85-3dd9-11ea-a994-fa163e34d433,ResourceVersion:19187094,Generation:0,CreationTimestamp:2020-01-23 12:13:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 12:14:08.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-2nkq4" for this suite.
Jan 23 12:14:14.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:14:14.666: INFO: namespace: e2e-tests-watch-2nkq4, resource: bindings, ignored listing per whitelist
Jan 23 12:14:14.744: INFO: namespace e2e-tests-watch-2nkq4 deletion completed in 6.336605806s
• [SLOW TEST:16.781 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
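The ADDED/MODIFIED/DELETED sequence above is the defining behaviour of a label-selector watch: relabeling an object out of the selector surfaces as a DELETED event, and restoring the label surfaces as ADDED, even though the object itself is never deleted until the end. The same semantics can be reproduced by hand; a minimal sketch, assuming a live cluster and a recent kubectl (the `--output-watch-events` flag postdates the v1.13 client used in this run; names mirror the log but are illustrative):

```shell
# Terminal 1: watch only ConfigMaps carrying the test label; runs until interrupted.
kubectl get configmaps -l watch-this-configmap=label-changed-and-restored \
  --watch --output-watch-events

# Terminal 2: create, relabel away, relabel back, then delete.
kubectl create configmap e2e-watch-test-label-changed
kubectl label configmap e2e-watch-test-label-changed watch-this-configmap=label-changed-and-restored
kubectl label configmap e2e-watch-test-label-changed --overwrite watch-this-configmap=other   # watcher sees DELETED
kubectl label configmap e2e-watch-test-label-changed --overwrite watch-this-configmap=label-changed-and-restored  # watcher sees ADDED
kubectl delete configmap e2e-watch-test-label-changed                                          # watcher sees DELETED
```

The watch stream is driven entirely by whether the object currently matches the selector, which is exactly what this conformance test asserts.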
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 12:14:14.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0123 12:14:18.031852 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 23 12:14:18.032: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 12:14:18.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-m5tfv" for this suite.
Jan 23 12:14:24.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:14:24.690: INFO: namespace: e2e-tests-gc-m5tfv, resource: bindings, ignored listing per whitelist
Jan 23 12:14:24.748: INFO: namespace e2e-tests-gc-m5tfv deletion completed in 6.701068159s
• [SLOW TEST:10.003 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 12:14:24.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 23 12:14:25.152: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"e5216de6-3dd9-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00175c702), BlockOwnerDeletion:(*bool)(0xc00175c703)}}
Jan 23 12:14:25.446: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"e51c9b0e-3dd9-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00228f612), BlockOwnerDeletion:(*bool)(0xc00228f613)}}
Jan 23 12:14:25.489: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"e51efa87-3dd9-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001dd544a), BlockOwnerDeletion:(*bool)(0xc001dd544b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 12:14:30.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-z2v6q" for this suite.
Jan 23 12:14:36.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:14:36.924: INFO: namespace: e2e-tests-gc-z2v6q, resource: bindings, ignored listing per whitelist
Jan 23 12:14:37.083: INFO: namespace e2e-tests-gc-z2v6q deletion completed in 6.46813096s
• [SLOW TEST:12.335 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 12:14:37.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-ec7c047c-3dd9-11ea-bb65-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 23 12:14:37.373: INFO: Waiting up to 5m0s for pod "pod-configmaps-ec8388cb-3dd9-11ea-bb65-0242ac110005" in namespace "e2e-tests-configmap-27cpk" to be "success or failure"
Jan 23 12:14:37.430: INFO: Pod "pod-configmaps-ec8388cb-3dd9-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 57.667486ms
Jan 23 12:14:39.848: INFO: Pod "pod-configmaps-ec8388cb-3dd9-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.475391813s
Jan 23 12:14:41.877: INFO: Pod "pod-configmaps-ec8388cb-3dd9-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.504589823s
Jan 23 12:14:43.945: INFO: Pod "pod-configmaps-ec8388cb-3dd9-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.572254246s
Jan 23 12:14:46.289: INFO: Pod "pod-configmaps-ec8388cb-3dd9-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.916255367s
Jan 23 12:14:48.303: INFO: Pod "pod-configmaps-ec8388cb-3dd9-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.930116387s
STEP: Saw pod success
Jan 23 12:14:48.303: INFO: Pod "pod-configmaps-ec8388cb-3dd9-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 12:14:48.328: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-ec8388cb-3dd9-11ea-bb65-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Jan 23 12:14:48.487: INFO: Waiting for pod pod-configmaps-ec8388cb-3dd9-11ea-bb65-0242ac110005 to disappear
Jan 23 12:14:48.503: INFO: Pod pod-configmaps-ec8388cb-3dd9-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 12:14:48.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-27cpk" for this suite.
Jan 23 12:14:55.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:14:55.080: INFO: namespace: e2e-tests-configmap-27cpk, resource: bindings, ignored listing per whitelist
Jan 23 12:14:55.228: INFO: namespace e2e-tests-configmap-27cpk deletion completed in 6.69962871s
• [SLOW TEST:18.144 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected secret
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 12:14:55.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-f73f1c5d-3dd9-11ea-bb65-0242ac110005
STEP: Creating secret with name s-test-opt-upd-f73f1cac-3dd9-11ea-bb65-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-f73f1c5d-3dd9-11ea-bb65-0242ac110005
STEP: Updating secret s-test-opt-upd-f73f1cac-3dd9-11ea-bb65-0242ac110005
STEP: Creating secret with name s-test-opt-create-f73f1cd3-3dd9-11ea-bb65-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 12:15:11.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4bjmw" for this suite.
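The deleted/updated/created secret dance above works because the projected sources are marked optional, so the pod keeps running while `s-test-opt-del-*` disappears and the kubelet refreshes the mounted files as the secrets change. A hedged sketch of such a pod spec (names and image are illustrative, not the test's actual manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-secrets
      mountPath: /etc/projected
  volumes:
  - name: projected-secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del
          optional: true   # pod starts and keeps running even if this secret is missing or deleted
      - secret:
          name: s-test-opt-upd
          optional: true   # updates to this secret are eventually reflected in the mounted files
```

Without `optional: true`, deleting a referenced secret would leave the volume in an error state rather than simply removing its keys from the mount.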
Jan 23 12:15:37.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:15:38.045: INFO: namespace: e2e-tests-projected-4bjmw, resource: bindings, ignored listing per whitelist
Jan 23 12:15:38.062: INFO: namespace e2e-tests-projected-4bjmw deletion completed in 26.252994665s
• [SLOW TEST:42.834 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update
  should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 12:15:38.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 23 12:15:38.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-89z9w'
Jan 23 12:15:38.573: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 23 12:15:38.574: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan 23 12:15:38.582: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jan 23 12:15:38.608: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan 23 12:15:38.738: INFO: scanned /root for discovery docs:
Jan 23 12:15:38.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-89z9w'
Jan 23 12:16:03.238: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 23 12:16:03.238: INFO: stdout: "Created e2e-test-nginx-rc-37687c235dd4e94db9b4c61117d8a734\nScaling up e2e-test-nginx-rc-37687c235dd4e94db9b4c61117d8a734 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-37687c235dd4e94db9b4c61117d8a734 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-37687c235dd4e94db9b4c61117d8a734 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Jan 23 12:16:03.238: INFO: stdout: "Created e2e-test-nginx-rc-37687c235dd4e94db9b4c61117d8a734\nScaling up e2e-test-nginx-rc-37687c235dd4e94db9b4c61117d8a734 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-37687c235dd4e94db9b4c61117d8a734 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-37687c235dd4e94db9b4c61117d8a734 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan 23 12:16:03.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-89z9w'
Jan 23 12:16:03.412: INFO: stderr: ""
Jan 23 12:16:03.413: INFO: stdout: "e2e-test-nginx-rc-37687c235dd4e94db9b4c61117d8a734-9j2xx e2e-test-nginx-rc-8bbc5 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 23 12:16:08.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-89z9w'
Jan 23 12:16:08.655: INFO: stderr: ""
Jan 23 12:16:08.655: INFO: stdout: "e2e-test-nginx-rc-37687c235dd4e94db9b4c61117d8a734-9j2xx e2e-test-nginx-rc-8bbc5 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 23 12:16:13.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-89z9w'
Jan 23 12:16:13.811: INFO: stderr: ""
Jan 23 12:16:13.811: INFO: stdout: "e2e-test-nginx-rc-37687c235dd4e94db9b4c61117d8a734-9j2xx e2e-test-nginx-rc-8bbc5 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 23 12:16:18.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-89z9w'
Jan 23 12:16:21.791: INFO: stderr: ""
Jan 23 12:16:21.791: INFO: stdout: "e2e-test-nginx-rc-37687c235dd4e94db9b4c61117d8a734-9j2xx e2e-test-nginx-rc-8bbc5 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 23 12:16:26.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-89z9w'
Jan 23 12:16:27.022: INFO: stderr: ""
Jan 23 12:16:27.022: INFO: stdout: "e2e-test-nginx-rc-37687c235dd4e94db9b4c61117d8a734-9j2xx e2e-test-nginx-rc-8bbc5 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 23 12:16:32.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-89z9w'
Jan 23 12:16:32.189: INFO: stderr: ""
Jan 23 12:16:32.189: INFO: stdout: "e2e-test-nginx-rc-37687c235dd4e94db9b4c61117d8a734-9j2xx e2e-test-nginx-rc-8bbc5 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 23 12:16:37.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-89z9w'
Jan 23 12:16:37.382: INFO: stderr: ""
Jan 23 12:16:37.382: INFO: stdout: "e2e-test-nginx-rc-37687c235dd4e94db9b4c61117d8a734-9j2xx e2e-test-nginx-rc-8bbc5 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 23 12:16:42.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-89z9w'
Jan 23 12:16:42.572: INFO: stderr: ""
Jan 23 12:16:42.572: INFO: stdout: "e2e-test-nginx-rc-37687c235dd4e94db9b4c61117d8a734-9j2xx e2e-test-nginx-rc-8bbc5 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 23 12:16:47.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-89z9w'
Jan 23 12:16:47.747: INFO: stderr: ""
Jan 23 12:16:47.747: INFO: stdout: "e2e-test-nginx-rc-37687c235dd4e94db9b4c61117d8a734-9j2xx e2e-test-nginx-rc-8bbc5 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 23 12:16:52.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-89z9w'
Jan 23 12:16:52.941: INFO: stderr: ""
Jan 23 12:16:52.941: INFO: stdout: "e2e-test-nginx-rc-37687c235dd4e94db9b4c61117d8a734-9j2xx e2e-test-nginx-rc-8bbc5 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 23 12:16:57.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-89z9w'
Jan 23 12:16:58.095: INFO: stderr: ""
Jan 23 12:16:58.095: INFO: stdout: "e2e-test-nginx-rc-37687c235dd4e94db9b4c61117d8a734-9j2xx e2e-test-nginx-rc-8bbc5 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 23 12:17:03.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-89z9w'
Jan 23 12:17:03.301: INFO: stderr: ""
Jan 23 12:17:03.302: INFO: stdout: "e2e-test-nginx-rc-37687c235dd4e94db9b4c61117d8a734-9j2xx e2e-test-nginx-rc-8bbc5 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 23 12:17:08.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-89z9w'
Jan 23 12:17:08.471: INFO: stderr: ""
Jan 23 12:17:08.472: INFO: stdout: "e2e-test-nginx-rc-37687c235dd4e94db9b4c61117d8a734-9j2xx "
Jan 23 12:17:08.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-37687c235dd4e94db9b4c61117d8a734-9j2xx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-89z9w'
Jan 23 12:17:08.623: INFO: stderr: ""
Jan 23 12:17:08.623: INFO: stdout: "true"
Jan 23 12:17:08.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-37687c235dd4e94db9b4c61117d8a734-9j2xx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-89z9w'
Jan 23 12:17:08.713: INFO: stderr: ""
Jan 23 12:17:08.713: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan 23 12:17:08.713: INFO: e2e-test-nginx-rc-37687c235dd4e94db9b4c61117d8a734-9j2xx is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jan 23 12:17:08.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-89z9w'
Jan 23 12:17:08.863: INFO: stderr: ""
Jan 23 12:17:08.863: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 12:17:08.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-89z9w" for this suite.
Jan 23 12:17:30.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:17:31.125: INFO: namespace: e2e-tests-kubectl-89z9w, resource: bindings, ignored listing per whitelist
Jan 23 12:17:31.144: INFO: namespace e2e-tests-kubectl-89z9w deletion completed in 22.270880995s
• [SLOW TEST:113.081 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 12:17:31.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 12:17:41.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-s9shb" for this suite.
Jan 23 12:18:23.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:18:23.696: INFO: namespace: e2e-tests-kubelet-test-s9shb, resource: bindings, ignored listing per whitelist
Jan 23 12:18:23.714: INFO: namespace e2e-tests-kubelet-test-s9shb deletion completed in 42.252736167s
• [SLOW TEST:52.569 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 12:18:23.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 23 12:18:23.928: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 12:18:45.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-r5dgm" for this suite.
Jan 23 12:19:09.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:19:09.350: INFO: namespace: e2e-tests-init-container-r5dgm, resource: bindings, ignored listing per whitelist
Jan 23 12:19:09.423: INFO: namespace e2e-tests-init-container-r5dgm deletion completed in 24.226302822s
• [SLOW TEST:45.709 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 12:19:09.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 23 12:19:20.344: INFO: Successfully updated pod "labelsupdate8eca06e4-3dda-11ea-bb65-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 12:19:22.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4727q" for this suite.
Jan 23 12:19:58.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:19:59.082: INFO: namespace: e2e-tests-projected-4727q, resource: bindings, ignored listing per whitelist
Jan 23 12:19:59.155: INFO: namespace e2e-tests-projected-4727q deletion completed in 36.486641376s
• [SLOW TEST:49.732 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 12:19:59.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 23 12:19:59.366: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 23 12:20:04.575: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 23 12:20:06.926: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jan 23 12:20:06.974: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-7d79t,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7d79t/deployments/test-cleanup-deployment,UID:b0f6dd6d-3dda-11ea-a994-fa163e34d433,ResourceVersion:19187852,Generation:1,CreationTimestamp:2020-01-23 12:20:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Jan 23 12:20:07.005: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. 
Jan 23 12:20:07.005: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jan 23 12:20:07.006: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-7d79t,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7d79t/replicasets/test-cleanup-controller,UID:ac70dbcb-3dda-11ea-a994-fa163e34d433,ResourceVersion:19187853,Generation:1,CreationTimestamp:2020-01-23 12:19:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment b0f6dd6d-3dda-11ea-a994-fa163e34d433 0xc001cf3f9f 0xc001cf3fc0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 23 12:20:07.099: INFO: Pod "test-cleanup-controller-rrgwt" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-rrgwt,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-7d79t,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7d79t/pods/test-cleanup-controller-rrgwt,UID:ac753f90-3dda-11ea-a994-fa163e34d433,ResourceVersion:19187850,Generation:0,CreationTimestamp:2020-01-23 12:19:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller ac70dbcb-3dda-11ea-a994-fa163e34d433 0xc000fe23b7 0xc000fe23b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rmhml {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rmhml,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rmhml true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000fe2730} {node.kubernetes.io/unreachable Exists NoExecute 0xc000fe2750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:19:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:20:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:20:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:19:59 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-23 12:19:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-23 12:20:06 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2316b7c245791b980fd28892e4e180b4595eec5c9cdd297d3e7040a9c3fa66e5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:20:07.100: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-7d79t" for this suite. Jan 23 12:20:17.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:20:17.601: INFO: namespace: e2e-tests-deployment-7d79t, resource: bindings, ignored listing per whitelist Jan 23 12:20:17.724: INFO: namespace e2e-tests-deployment-7d79t deletion completed in 10.509162932s • [SLOW TEST:18.569 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 12:20:17.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-ktsdm/configmap-test-b79f1ead-3dda-11ea-bb65-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 23 12:20:18.145: INFO: Waiting up to 5m0s for pod "pod-configmaps-b7a0c6ca-3dda-11ea-bb65-0242ac110005" in namespace "e2e-tests-configmap-ktsdm" to be "success or failure" Jan 23 12:20:18.164: INFO: Pod "pod-configmaps-b7a0c6ca-3dda-11ea-bb65-0242ac110005": Phase="Pending", Reason="", 
readiness=false. Elapsed: 18.976439ms Jan 23 12:20:20.312: INFO: Pod "pod-configmaps-b7a0c6ca-3dda-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16717744s Jan 23 12:20:22.346: INFO: Pod "pod-configmaps-b7a0c6ca-3dda-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.201180036s Jan 23 12:20:24.364: INFO: Pod "pod-configmaps-b7a0c6ca-3dda-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.219160445s Jan 23 12:20:26.385: INFO: Pod "pod-configmaps-b7a0c6ca-3dda-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.239994852s Jan 23 12:20:28.408: INFO: Pod "pod-configmaps-b7a0c6ca-3dda-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.262424199s Jan 23 12:20:30.422: INFO: Pod "pod-configmaps-b7a0c6ca-3dda-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.276551354s STEP: Saw pod success Jan 23 12:20:30.422: INFO: Pod "pod-configmaps-b7a0c6ca-3dda-11ea-bb65-0242ac110005" satisfied condition "success or failure" Jan 23 12:20:30.429: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-b7a0c6ca-3dda-11ea-bb65-0242ac110005 container env-test: STEP: delete the pod Jan 23 12:20:31.720: INFO: Waiting for pod pod-configmaps-b7a0c6ca-3dda-11ea-bb65-0242ac110005 to disappear Jan 23 12:20:32.061: INFO: Pod pod-configmaps-b7a0c6ca-3dda-11ea-bb65-0242ac110005 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:20:32.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-ktsdm" for this suite. 
Jan 23 12:20:38.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:20:38.277: INFO: namespace: e2e-tests-configmap-ktsdm, resource: bindings, ignored listing per whitelist Jan 23 12:20:38.402: INFO: namespace e2e-tests-configmap-ktsdm deletion completed in 6.300379811s • [SLOW TEST:20.676 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 12:20:38.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jan 23 12:20:38.750: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-gm4cd,SelfLink:/api/v1/namespaces/e2e-tests-watch-gm4cd/configmaps/e2e-watch-test-resource-version,UID:c3e9126d-3dda-11ea-a994-fa163e34d433,ResourceVersion:19187951,Generation:0,CreationTimestamp:2020-01-23 12:20:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 23 12:20:38.750: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-gm4cd,SelfLink:/api/v1/namespaces/e2e-tests-watch-gm4cd/configmaps/e2e-watch-test-resource-version,UID:c3e9126d-3dda-11ea-a994-fa163e34d433,ResourceVersion:19187952,Generation:0,CreationTimestamp:2020-01-23 12:20:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:20:38.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-gm4cd" for this suite. 
Jan 23 12:20:44.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:20:44.940: INFO: namespace: e2e-tests-watch-gm4cd, resource: bindings, ignored listing per whitelist Jan 23 12:20:45.021: INFO: namespace e2e-tests-watch-gm4cd deletion completed in 6.259862703s • [SLOW TEST:6.619 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 12:20:45.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-c7c564a3-3dda-11ea-bb65-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 23 12:20:45.230: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c7c8411a-3dda-11ea-bb65-0242ac110005" in namespace "e2e-tests-projected-v4tw4" to be "success or failure" Jan 23 12:20:45.236: INFO: Pod "pod-projected-configmaps-c7c8411a-3dda-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.832295ms Jan 23 12:20:47.254: INFO: Pod "pod-projected-configmaps-c7c8411a-3dda-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02332874s Jan 23 12:20:49.280: INFO: Pod "pod-projected-configmaps-c7c8411a-3dda-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0497573s Jan 23 12:20:51.361: INFO: Pod "pod-projected-configmaps-c7c8411a-3dda-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130574763s Jan 23 12:20:53.391: INFO: Pod "pod-projected-configmaps-c7c8411a-3dda-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.160848811s Jan 23 12:20:55.407: INFO: Pod "pod-projected-configmaps-c7c8411a-3dda-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.176787751s STEP: Saw pod success Jan 23 12:20:55.407: INFO: Pod "pod-projected-configmaps-c7c8411a-3dda-11ea-bb65-0242ac110005" satisfied condition "success or failure" Jan 23 12:20:55.412: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-c7c8411a-3dda-11ea-bb65-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Jan 23 12:20:56.238: INFO: Waiting for pod pod-projected-configmaps-c7c8411a-3dda-11ea-bb65-0242ac110005 to disappear Jan 23 12:20:56.253: INFO: Pod pod-projected-configmaps-c7c8411a-3dda-11ea-bb65-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:20:56.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-v4tw4" for this suite. 
Jan 23 12:21:02.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:21:02.887: INFO: namespace: e2e-tests-projected-v4tw4, resource: bindings, ignored listing per whitelist Jan 23 12:21:02.932: INFO: namespace e2e-tests-projected-v4tw4 deletion completed in 6.66028765s • [SLOW TEST:17.910 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 12:21:02.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jan 23 12:21:03.127: INFO: Waiting up to 5m0s for pod "downward-api-d2731a84-3dda-11ea-bb65-0242ac110005" in namespace "e2e-tests-downward-api-mslcj" to be "success or failure" Jan 23 12:21:03.141: INFO: Pod "downward-api-d2731a84-3dda-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.14989ms Jan 23 12:21:05.154: INFO: Pod "downward-api-d2731a84-3dda-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026293796s Jan 23 12:21:07.181: INFO: Pod "downward-api-d2731a84-3dda-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053460081s Jan 23 12:21:09.196: INFO: Pod "downward-api-d2731a84-3dda-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068661649s Jan 23 12:21:11.341: INFO: Pod "downward-api-d2731a84-3dda-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.213523011s Jan 23 12:21:13.387: INFO: Pod "downward-api-d2731a84-3dda-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.259391325s STEP: Saw pod success Jan 23 12:21:13.387: INFO: Pod "downward-api-d2731a84-3dda-11ea-bb65-0242ac110005" satisfied condition "success or failure" Jan 23 12:21:13.396: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-d2731a84-3dda-11ea-bb65-0242ac110005 container dapi-container: STEP: delete the pod Jan 23 12:21:13.647: INFO: Waiting for pod downward-api-d2731a84-3dda-11ea-bb65-0242ac110005 to disappear Jan 23 12:21:13.656: INFO: Pod downward-api-d2731a84-3dda-11ea-bb65-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:21:13.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-mslcj" for this suite. 
Jan 23 12:21:19.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:21:19.960: INFO: namespace: e2e-tests-downward-api-mslcj, resource: bindings, ignored listing per whitelist Jan 23 12:21:19.967: INFO: namespace e2e-tests-downward-api-mslcj deletion completed in 6.251257754s • [SLOW TEST:17.035 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 12:21:19.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 23 12:21:20.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never 
--generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-pk44j' Jan 23 12:21:20.400: INFO: stderr: "" Jan 23 12:21:20.400: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 Jan 23 12:21:20.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-pk44j' Jan 23 12:21:26.225: INFO: stderr: "" Jan 23 12:21:26.225: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:21:26.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-pk44j" for this suite. Jan 23 12:21:32.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:21:32.388: INFO: namespace: e2e-tests-kubectl-pk44j, resource: bindings, ignored listing per whitelist Jan 23 12:21:32.526: INFO: namespace e2e-tests-kubectl-pk44j deletion completed in 6.280809125s • [SLOW TEST:12.559 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 
[BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 12:21:32.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-e41e68f2-3dda-11ea-bb65-0242ac110005 STEP: Creating a pod to test consume secrets Jan 23 12:21:32.794: INFO: Waiting up to 5m0s for pod "pod-secrets-e41fc814-3dda-11ea-bb65-0242ac110005" in namespace "e2e-tests-secrets-fj9cs" to be "success or failure" Jan 23 12:21:32.843: INFO: Pod "pod-secrets-e41fc814-3dda-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 48.921422ms Jan 23 12:21:34.856: INFO: Pod "pod-secrets-e41fc814-3dda-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062427773s Jan 23 12:21:36.872: INFO: Pod "pod-secrets-e41fc814-3dda-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078097706s Jan 23 12:21:39.055: INFO: Pod "pod-secrets-e41fc814-3dda-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.260526353s Jan 23 12:21:41.072: INFO: Pod "pod-secrets-e41fc814-3dda-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.278115513s Jan 23 12:21:43.115: INFO: Pod "pod-secrets-e41fc814-3dda-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.320498621s STEP: Saw pod success Jan 23 12:21:43.115: INFO: Pod "pod-secrets-e41fc814-3dda-11ea-bb65-0242ac110005" satisfied condition "success or failure" Jan 23 12:21:43.128: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-e41fc814-3dda-11ea-bb65-0242ac110005 container secret-volume-test: STEP: delete the pod Jan 23 12:21:43.260: INFO: Waiting for pod pod-secrets-e41fc814-3dda-11ea-bb65-0242ac110005 to disappear Jan 23 12:21:43.269: INFO: Pod pod-secrets-e41fc814-3dda-11ea-bb65-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:21:43.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-fj9cs" for this suite. Jan 23 12:21:49.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:21:49.478: INFO: namespace: e2e-tests-secrets-fj9cs, resource: bindings, ignored listing per whitelist Jan 23 12:21:49.644: INFO: namespace e2e-tests-secrets-fj9cs deletion completed in 6.309184301s • [SLOW TEST:17.118 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 12:21:49.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-4wr5h in namespace e2e-tests-proxy-9fnl5 I0123 12:21:49.874497 8 runners.go:184] Created replication controller with name: proxy-service-4wr5h, namespace: e2e-tests-proxy-9fnl5, replica count: 1 I0123 12:21:50.925246 8 runners.go:184] proxy-service-4wr5h Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 12:21:51.925552 8 runners.go:184] proxy-service-4wr5h Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 12:21:52.926001 8 runners.go:184] proxy-service-4wr5h Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 12:21:53.926690 8 runners.go:184] proxy-service-4wr5h Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 12:21:54.927088 8 runners.go:184] proxy-service-4wr5h Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 12:21:55.927397 8 runners.go:184] proxy-service-4wr5h Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 12:21:56.927878 8 runners.go:184] proxy-service-4wr5h Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 12:21:57.928174 8 runners.go:184] proxy-service-4wr5h Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 12:21:58.928455 8 
runners.go:184] proxy-service-4wr5h Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0123 12:21:59.928762 8 runners.go:184] proxy-service-4wr5h Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0123 12:22:00.929018 8 runners.go:184] proxy-service-4wr5h Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0123 12:22:01.929422 8 runners.go:184] proxy-service-4wr5h Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0123 12:22:02.929751 8 runners.go:184] proxy-service-4wr5h Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0123 12:22:03.930429 8 runners.go:184] proxy-service-4wr5h Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0123 12:22:04.930880 8 runners.go:184] proxy-service-4wr5h Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0123 12:22:05.931205 8 runners.go:184] proxy-service-4wr5h Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0123 12:22:06.931487 8 runners.go:184] proxy-service-4wr5h Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0123 12:22:07.931801 8 runners.go:184] proxy-service-4wr5h Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0123 12:22:08.932233 8 runners.go:184] proxy-service-4wr5h Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 23 12:22:08.947: INFO: setup took 19.195545128s, starting 
test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jan 23 12:22:08.976: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-9fnl5/pods/http:proxy-service-4wr5h-z2dgf:162/proxy/: bar (200; 28.469164ms) Jan 23 12:22:08.976: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-9fnl5/pods/http:proxy-service-4wr5h-z2dgf:160/proxy/: foo (200; 28.362388ms) Jan 23 12:22:08.981: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-9fnl5/pods/proxy-service-4wr5h-z2dgf:162/proxy/: bar (200; 34.236527ms) Jan 23 12:22:08.982: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-9fnl5/pods/proxy-service-4wr5h-z2dgf:1080/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-01681983-3ddb-11ea-bb65-0242ac110005 STEP: Creating secret with name secret-projected-all-test-volume-01681963-3ddb-11ea-bb65-0242ac110005 STEP: Creating a pod to test Check all projections for projected volume plugin Jan 23 12:22:21.936: INFO: Waiting up to 5m0s for pod "projected-volume-016818f7-3ddb-11ea-bb65-0242ac110005" in namespace "e2e-tests-projected-89ps8" to be "success or failure" Jan 23 12:22:22.056: INFO: Pod "projected-volume-016818f7-3ddb-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 119.690768ms Jan 23 12:22:24.161: INFO: Pod "projected-volume-016818f7-3ddb-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224730056s Jan 23 12:22:26.174: INFO: Pod "projected-volume-016818f7-3ddb-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.238237984s Jan 23 12:22:28.351: INFO: Pod "projected-volume-016818f7-3ddb-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.415249562s Jan 23 12:22:30.380: INFO: Pod "projected-volume-016818f7-3ddb-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.443752786s Jan 23 12:22:32.399: INFO: Pod "projected-volume-016818f7-3ddb-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.462529366s STEP: Saw pod success Jan 23 12:22:32.399: INFO: Pod "projected-volume-016818f7-3ddb-11ea-bb65-0242ac110005" satisfied condition "success or failure" Jan 23 12:22:32.412: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-016818f7-3ddb-11ea-bb65-0242ac110005 container projected-all-volume-test: STEP: delete the pod Jan 23 12:22:32.555: INFO: Waiting for pod projected-volume-016818f7-3ddb-11ea-bb65-0242ac110005 to disappear Jan 23 12:22:32.606: INFO: Pod projected-volume-016818f7-3ddb-11ea-bb65-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:22:32.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-89ps8" for this suite. 
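The "Projected combined" spec that finishes above creates a ConfigMap and a Secret, then mounts them together with a downward-API source through a single projected volume and checks all three appear. As a rough sketch of the shape of that volume (a plain-dict rendering; the volume name and downward-API path here are illustrative assumptions, not the generated names from this run):

```python
def projected_volume(name, configmap, secret):
    """Build a pod-spec volume dict that projects a ConfigMap, a Secret,
    and the downward API into one mount, as the e2e spec exercises."""
    return {
        "name": name,
        "projected": {
            "sources": [
                {"configMap": {"name": configmap}},
                {"secret": {"name": secret}},
                {"downwardAPI": {"items": [
                    # expose the pod's own name as a file in the volume
                    {"path": "podname",
                     "fieldRef": {"fieldPath": "metadata.name"}},
                ]}},
            ]
        },
    }

vol = projected_volume("all-in-one", "my-config", "my-secret")
```

A container mounting `vol` would see the ConfigMap keys, Secret keys, and `podname` side by side under one mount path.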
Jan 23 12:22:38.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:22:38.824: INFO: namespace: e2e-tests-projected-89ps8, resource: bindings, ignored listing per whitelist Jan 23 12:22:38.845: INFO: namespace e2e-tests-projected-89ps8 deletion completed in 6.18916473s • [SLOW TEST:17.159 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 12:22:38.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jan 23 12:22:49.107: INFO: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-0b949405-3ddb-11ea-bb65-0242ac110005,GenerateName:,Namespace:e2e-tests-events-62ktw,SelfLink:/api/v1/namespaces/e2e-tests-events-62ktw/pods/send-events-0b949405-3ddb-11ea-bb65-0242ac110005,UID:0ba2efcb-3ddb-11ea-a994-fa163e34d433,ResourceVersion:19188288,Generation:0,CreationTimestamp:2020-01-23 12:22:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 964035260,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pwlx8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlx8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-pwlx8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023210f0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002321110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:22:39 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:22:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:22:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:22:39 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-23 12:22:39 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-23 12:22:47 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://94a86e18d754428239bcbb573719903233d648891776f6c83c2ba0cf6edb009c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Jan 23 12:22:51.165: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jan 23 12:22:53.202: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:22:53.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-62ktw" for this suite. 
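The Events spec above checks separately for a scheduler event and a kubelet event about the pod; the framework does this by listing events filtered with a field selector scoped to the pod and the reporting component. A hedged sketch of building such a selector string (the real test assembles it with `fields.Set` in Go; the key names below follow the Event API's `involvedObject` fields and may not match the framework's exact set):

```python
def event_selector(namespace, pod_name, source):
    """Compose a field selector matching events emitted for one pod by
    one component (e.g. default-scheduler or the node's kubelet)."""
    fields = {
        "involvedObject.kind": "Pod",
        "involvedObject.name": pod_name,
        "involvedObject.namespace": namespace,
        "source": source,
    }
    # field selectors are comma-separated key=value pairs
    return ",".join(f"{k}={v}" for k, v in sorted(fields.items()))

sel = event_selector("e2e-tests-events-62ktw",
                     "send-events-0b949405-3ddb-11ea-bb65-0242ac110005",
                     "default-scheduler")
```

The test polls with a selector like this until at least one matching event shows up, which is what "Saw scheduler event for our pod" reports.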
Jan 23 12:23:33.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:23:33.386: INFO: namespace: e2e-tests-events-62ktw, resource: bindings, ignored listing per whitelist Jan 23 12:23:33.392: INFO: namespace e2e-tests-events-62ktw deletion completed in 40.145703408s • [SLOW TEST:54.547 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 12:23:33.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-hqk92 Jan 23 12:23:43.670: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-hqk92 STEP: checking the pod's current state and verifying that restartCount is present Jan 23 
12:23:43.677: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:27:44.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-hqk92" for this suite. Jan 23 12:27:52.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:27:52.712: INFO: namespace: e2e-tests-container-probe-hqk92, resource: bindings, ignored listing per whitelist Jan 23 12:27:52.715: INFO: namespace e2e-tests-container-probe-hqk92 deletion completed in 8.304915063s • [SLOW TEST:259.321 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 12:27:52.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 23 12:27:52.955: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:28:03.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-qdw9c" for this suite. Jan 23 12:28:46.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:28:46.158: INFO: namespace: e2e-tests-pods-qdw9c, resource: bindings, ignored listing per whitelist Jan 23 12:28:46.240: INFO: namespace e2e-tests-pods-qdw9c deletion completed in 42.39167223s • [SLOW TEST:53.525 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 12:28:46.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-5c9p5 STEP: Waiting for pods 
to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-5c9p5 STEP: Deleting pre-stop pod Jan 23 12:29:09.718: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:29:09.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-5c9p5" for this suite. Jan 23 12:29:43.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:29:43.956: INFO: namespace: e2e-tests-prestop-5c9p5, resource: bindings, ignored listing per whitelist Jan 23 12:29:43.981: INFO: namespace e2e-tests-prestop-5c9p5 deletion completed in 34.228675702s • [SLOW TEST:57.741 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: 
Creating a kubernetes client Jan 23 12:29:43.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-j8r44 Jan 23 12:29:54.270: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-j8r44 STEP: checking the pod's current state and verifying that restartCount is present Jan 23 12:29:54.314: INFO: Initial restart count of pod liveness-http is 0 Jan 23 12:30:20.763: INFO: Restart count of pod e2e-tests-container-probe-j8r44/liveness-http is now 1 (26.44869753s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:30:20.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-j8r44" for this suite. 
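In the run above, liveness-http starts with restart count 0 and the kubelet restarts it roughly 26 seconds later once its /healthz endpoint begins failing. A minimal sketch of the container spec driving that behavior (the image name, delay, and threshold values are illustrative assumptions, not read from the test source):

```python
def http_liveness_container(name, image, port, path="/healthz"):
    """Container spec with an HTTP GET liveness probe; the kubelet
    restarts the container once the probe fails failureThreshold
    consecutive times."""
    return {
        "name": name,
        "image": image,
        "livenessProbe": {
            "httpGet": {"path": path, "port": port},
            "initialDelaySeconds": 15,  # grace period before first probe
            "periodSeconds": 5,         # probe interval
            "failureThreshold": 3,      # failures before restart
        },
    }

c = http_liveness_container("liveness", "k8s.gcr.io/liveness", 8080)
```

With settings like these, an endpoint that starts returning non-2xx is restarted after about failureThreshold x periodSeconds, consistent with the ~26s restart observed in the log.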
Jan 23 12:30:28.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:30:29.151: INFO: namespace: e2e-tests-container-probe-j8r44, resource: bindings, ignored listing per whitelist Jan 23 12:30:29.300: INFO: namespace e2e-tests-container-probe-j8r44 deletion completed in 8.353540069s • [SLOW TEST:45.317 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 12:30:29.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Jan 23 12:30:29.532: INFO: Waiting up to 5m0s for pod "var-expansion-240da112-3ddc-11ea-bb65-0242ac110005" in namespace "e2e-tests-var-expansion-dqv6n" to be "success or failure" Jan 23 12:30:29.557: INFO: Pod "var-expansion-240da112-3ddc-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.336871ms Jan 23 12:30:31.595: INFO: Pod "var-expansion-240da112-3ddc-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.063087347s Jan 23 12:30:33.615: INFO: Pod "var-expansion-240da112-3ddc-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083454241s Jan 23 12:30:35.858: INFO: Pod "var-expansion-240da112-3ddc-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.32563201s Jan 23 12:30:37.904: INFO: Pod "var-expansion-240da112-3ddc-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.372241419s Jan 23 12:30:39.920: INFO: Pod "var-expansion-240da112-3ddc-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.388171798s STEP: Saw pod success Jan 23 12:30:39.920: INFO: Pod "var-expansion-240da112-3ddc-11ea-bb65-0242ac110005" satisfied condition "success or failure" Jan 23 12:30:39.926: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-240da112-3ddc-11ea-bb65-0242ac110005 container dapi-container: STEP: delete the pod Jan 23 12:30:40.921: INFO: Waiting for pod var-expansion-240da112-3ddc-11ea-bb65-0242ac110005 to disappear Jan 23 12:30:40.945: INFO: Pod var-expansion-240da112-3ddc-11ea-bb65-0242ac110005 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:30:40.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-dqv6n" for this suite. 
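The Variable Expansion spec above verifies that one env var can be composed from previously defined vars using $(VAR) syntax. A simplified sketch of the substitution rule (the real logic is the kubelet's expansion package; this approximation covers only the $(NAME) and $$(NAME) escape cases):

```python
import re

def expand(value, env):
    """Expand $(NAME) references from env; $$(NAME) escapes to a
    literal $(NAME); an undefined $(NAME) is left untouched, as
    Kubernetes leaves unresolved references as-is."""
    def repl(m):
        if m.group(1):  # a second $ marks the escape form
            return "$(" + m.group(2) + ")"
        return env.get(m.group(2), m.group(0))
    return re.sub(r"\$(\$)?\((\w+)\)", repl, value)

composed = expand("$(FOO);;$(BAR)",
                  {"FOO": "foo-value", "BAR": "bar-value"})
```

This mirrors how the dapi-container in the log can define a new env var whose value interpolates earlier ones.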
Jan 23 12:30:47.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:30:47.233: INFO: namespace: e2e-tests-var-expansion-dqv6n, resource: bindings, ignored listing per whitelist Jan 23 12:30:47.262: INFO: namespace e2e-tests-var-expansion-dqv6n deletion completed in 6.302926638s • [SLOW TEST:17.962 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 12:30:47.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 23 12:31:11.521: INFO: Container started at 2020-01-23 12:30:55 +0000 UTC, pod became ready at 2020-01-23 12:31:11 +0000 UTC [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:31:11.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-f8b2t" for this suite. Jan 23 12:31:33.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:31:33.774: INFO: namespace: e2e-tests-container-probe-f8b2t, resource: bindings, ignored listing per whitelist Jan 23 12:31:33.985: INFO: namespace e2e-tests-container-probe-f8b2t deletion completed in 22.454632461s • [SLOW TEST:46.722 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 12:31:33.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 23 12:31:34.237: INFO: Waiting up to 5m0s for pod "pod-4a9e1d24-3ddc-11ea-bb65-0242ac110005" in namespace "e2e-tests-emptydir-vm7pj" to be "success or failure" Jan 
23 12:31:34.403: INFO: Pod "pod-4a9e1d24-3ddc-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 165.42947ms Jan 23 12:31:36.417: INFO: Pod "pod-4a9e1d24-3ddc-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179379039s Jan 23 12:31:38.427: INFO: Pod "pod-4a9e1d24-3ddc-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190079352s Jan 23 12:31:40.568: INFO: Pod "pod-4a9e1d24-3ddc-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.331047233s Jan 23 12:31:42.596: INFO: Pod "pod-4a9e1d24-3ddc-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.35825466s Jan 23 12:31:44.631: INFO: Pod "pod-4a9e1d24-3ddc-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.393608093s STEP: Saw pod success Jan 23 12:31:44.631: INFO: Pod "pod-4a9e1d24-3ddc-11ea-bb65-0242ac110005" satisfied condition "success or failure" Jan 23 12:31:44.645: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4a9e1d24-3ddc-11ea-bb65-0242ac110005 container test-container: STEP: delete the pod Jan 23 12:31:44.869: INFO: Waiting for pod pod-4a9e1d24-3ddc-11ea-bb65-0242ac110005 to disappear Jan 23 12:31:44.894: INFO: Pod pod-4a9e1d24-3ddc-11ea-bb65-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:31:44.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-vm7pj" for this suite. 
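The EmptyDir spec above, "(root,0644,default)", runs a pod as root that writes a file with mode 0644 into an emptyDir volume backed by the node's default storage medium, then checks the mode and contents. A rough sketch of a pod of that shape (the busybox image, command, and mount path are illustrative assumptions; the conformance test uses its own mounttest image):

```python
def emptydir_pod(name, medium=""):
    """Pod manifest dict with an emptyDir volume: "" selects the node's
    default storage medium, "Memory" would select tmpfs."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "volumes": [{"name": "test-volume",
                         "emptyDir": {"medium": medium}}],
            "containers": [{
                "name": "test-container",
                "image": "busybox",
                # create a 0644-mode file inside the volume
                "command": ["sh", "-c",
                            "touch /mnt/test/file && chmod 0644 /mnt/test/file"],
                "volumeMounts": [{"name": "test-volume",
                                  "mountPath": "/mnt/test"}],
            }],
        },
    }

p = emptydir_pod("pod-emptydir-demo")
```

The "success or failure" polling in the log is the framework waiting for such a pod to reach phase Succeeded before reading its container log.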
Jan 23 12:31:50.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:31:51.017: INFO: namespace: e2e-tests-emptydir-vm7pj, resource: bindings, ignored listing per whitelist Jan 23 12:31:51.120: INFO: namespace e2e-tests-emptydir-vm7pj deletion completed in 6.216204923s • [SLOW TEST:17.134 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 12:31:51.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-gskhf STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 23 12:31:51.475: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 23 12:32:28.003: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-gskhf 
PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 23 12:32:28.003: INFO: >>> kubeConfig: /root/.kube/config I0123 12:32:28.144255 8 log.go:172] (0xc0008bd810) (0xc00102fcc0) Create stream I0123 12:32:28.144352 8 log.go:172] (0xc0008bd810) (0xc00102fcc0) Stream added, broadcasting: 1 I0123 12:32:28.152542 8 log.go:172] (0xc0008bd810) Reply frame received for 1 I0123 12:32:28.152609 8 log.go:172] (0xc0008bd810) (0xc002571e00) Create stream I0123 12:32:28.152628 8 log.go:172] (0xc0008bd810) (0xc002571e00) Stream added, broadcasting: 3 I0123 12:32:28.162321 8 log.go:172] (0xc0008bd810) Reply frame received for 3 I0123 12:32:28.162383 8 log.go:172] (0xc0008bd810) (0xc0026080a0) Create stream I0123 12:32:28.162410 8 log.go:172] (0xc0008bd810) (0xc0026080a0) Stream added, broadcasting: 5 I0123 12:32:28.164718 8 log.go:172] (0xc0008bd810) Reply frame received for 5 I0123 12:32:28.453038 8 log.go:172] (0xc0008bd810) Data frame received for 3 I0123 12:32:28.453089 8 log.go:172] (0xc002571e00) (3) Data frame handling I0123 12:32:28.453104 8 log.go:172] (0xc002571e00) (3) Data frame sent I0123 12:32:28.707185 8 log.go:172] (0xc0008bd810) Data frame received for 1 I0123 12:32:28.707310 8 log.go:172] (0xc0008bd810) (0xc0026080a0) Stream removed, broadcasting: 5 I0123 12:32:28.707475 8 log.go:172] (0xc00102fcc0) (1) Data frame handling I0123 12:32:28.707551 8 log.go:172] (0xc00102fcc0) (1) Data frame sent I0123 12:32:28.707623 8 log.go:172] (0xc0008bd810) (0xc002571e00) Stream removed, broadcasting: 3 I0123 12:32:28.707733 8 log.go:172] (0xc0008bd810) (0xc00102fcc0) Stream removed, broadcasting: 1 I0123 12:32:28.708048 8 log.go:172] (0xc0008bd810) Go away received I0123 12:32:28.708214 8 log.go:172] (0xc0008bd810) (0xc00102fcc0) Stream removed, broadcasting: 1 I0123 12:32:28.708246 8 log.go:172] (0xc0008bd810) (0xc002571e00) Stream removed, broadcasting: 3 I0123 12:32:28.708266 8 log.go:172] 
(0xc0008bd810) (0xc0026080a0) Stream removed, broadcasting: 5 Jan 23 12:32:28.708: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:32:28.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-gskhf" for this suite. Jan 23 12:32:56.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:32:56.941: INFO: namespace: e2e-tests-pod-network-test-gskhf, resource: bindings, ignored listing per whitelist Jan 23 12:32:56.952: INFO: namespace e2e-tests-pod-network-test-gskhf deletion completed in 28.224429765s • [SLOW TEST:65.832 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 12:32:56.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-7c150712-3ddc-11ea-bb65-0242ac110005 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-7c150712-3ddc-11ea-bb65-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:33:09.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-t7zgc" for this suite. Jan 23 12:33:33.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:33:33.511: INFO: namespace: e2e-tests-projected-t7zgc, resource: bindings, ignored listing per whitelist Jan 23 12:33:33.573: INFO: namespace e2e-tests-projected-t7zgc deletion completed in 24.123924329s • [SLOW TEST:36.620 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 12:33:33.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Jan 23 12:33:33.733: INFO: namespace e2e-tests-kubectl-9hkn9 Jan 23 12:33:33.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-9hkn9' Jan 23 12:33:36.069: INFO: stderr: "" Jan 23 12:33:36.069: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jan 23 12:33:37.324: INFO: Selector matched 1 pods for map[app:redis] Jan 23 12:33:37.324: INFO: Found 0 / 1 Jan 23 12:33:38.085: INFO: Selector matched 1 pods for map[app:redis] Jan 23 12:33:38.085: INFO: Found 0 / 1 Jan 23 12:33:39.160: INFO: Selector matched 1 pods for map[app:redis] Jan 23 12:33:39.160: INFO: Found 0 / 1 Jan 23 12:33:40.078: INFO: Selector matched 1 pods for map[app:redis] Jan 23 12:33:40.078: INFO: Found 0 / 1 Jan 23 12:33:41.104: INFO: Selector matched 1 pods for map[app:redis] Jan 23 12:33:41.104: INFO: Found 0 / 1 Jan 23 12:33:42.162: INFO: Selector matched 1 pods for map[app:redis] Jan 23 12:33:42.162: INFO: Found 0 / 1 Jan 23 12:33:43.085: INFO: Selector matched 1 pods for map[app:redis] Jan 23 12:33:43.085: INFO: Found 0 / 1 Jan 23 12:33:44.090: INFO: Selector matched 1 pods for map[app:redis] Jan 23 12:33:44.091: INFO: Found 0 / 1 Jan 23 12:33:45.079: INFO: Selector matched 1 pods for map[app:redis] Jan 23 12:33:45.079: INFO: Found 0 / 1 Jan 23 12:33:46.149: INFO: Selector matched 1 pods for map[app:redis] Jan 23 12:33:46.149: INFO: Found 1 / 1 Jan 23 12:33:46.149: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 23 12:33:46.162: INFO: Selector matched 1 pods for map[app:redis] Jan 23 12:33:46.162: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jan 23 12:33:46.162: INFO: wait on redis-master startup in e2e-tests-kubectl-9hkn9 Jan 23 12:33:46.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ptwcs redis-master --namespace=e2e-tests-kubectl-9hkn9' Jan 23 12:33:46.409: INFO: stderr: "" Jan 23 12:33:46.409: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 23 Jan 12:33:44.758 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 23 Jan 12:33:44.758 # Server started, Redis version 3.2.12\n1:M 23 Jan 12:33:44.758 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 23 Jan 12:33:44.759 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Jan 23 12:33:46.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-9hkn9' Jan 23 12:33:46.813: INFO: stderr: "" Jan 23 12:33:46.813: INFO: stdout: "service/rm2 exposed\n" Jan 23 12:33:46.828: INFO: Service rm2 in namespace e2e-tests-kubectl-9hkn9 found. 
STEP: exposing service Jan 23 12:33:48.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-9hkn9' Jan 23 12:33:49.187: INFO: stderr: "" Jan 23 12:33:49.187: INFO: stdout: "service/rm3 exposed\n" Jan 23 12:33:49.226: INFO: Service rm3 in namespace e2e-tests-kubectl-9hkn9 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:33:51.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-9hkn9" for this suite. Jan 23 12:34:17.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:34:17.489: INFO: namespace: e2e-tests-kubectl-9hkn9, resource: bindings, ignored listing per whitelist Jan 23 12:34:17.517: INFO: namespace e2e-tests-kubectl-9hkn9 deletion completed in 26.243999487s • [SLOW TEST:43.944 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 12:34:17.519: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Jan 23 12:34:25.975: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:34:58.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-djvz8" for this suite. Jan 23 12:35:04.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:35:04.415: INFO: namespace: e2e-tests-namespaces-djvz8, resource: bindings, ignored listing per whitelist Jan 23 12:35:04.458: INFO: namespace e2e-tests-namespaces-djvz8 deletion completed in 6.279657775s STEP: Destroying namespace "e2e-tests-nsdeletetest-hn8ct" for this suite. Jan 23 12:35:04.463: INFO: Namespace e2e-tests-nsdeletetest-hn8ct was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-zfkdc" for this suite. 
Jan 23 12:35:10.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:35:10.699: INFO: namespace: e2e-tests-nsdeletetest-zfkdc, resource: bindings, ignored listing per whitelist Jan 23 12:35:10.714: INFO: namespace e2e-tests-nsdeletetest-zfkdc deletion completed in 6.251784009s • [SLOW TEST:53.195 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 12:35:10.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jan 23 12:35:10.905: INFO: Waiting up to 5m0s for pod "downward-api-cbbcdce5-3ddc-11ea-bb65-0242ac110005" in namespace "e2e-tests-downward-api-pkqc4" to be "success or failure" Jan 23 12:35:10.926: INFO: Pod "downward-api-cbbcdce5-3ddc-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.601199ms Jan 23 12:35:13.058: INFO: Pod "downward-api-cbbcdce5-3ddc-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.153119214s Jan 23 12:35:15.094: INFO: Pod "downward-api-cbbcdce5-3ddc-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188917859s Jan 23 12:35:17.263: INFO: Pod "downward-api-cbbcdce5-3ddc-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.35787236s Jan 23 12:35:19.273: INFO: Pod "downward-api-cbbcdce5-3ddc-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.36844043s Jan 23 12:35:21.283: INFO: Pod "downward-api-cbbcdce5-3ddc-11ea-bb65-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.378529105s Jan 23 12:35:23.313: INFO: Pod "downward-api-cbbcdce5-3ddc-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.408378408s STEP: Saw pod success Jan 23 12:35:23.313: INFO: Pod "downward-api-cbbcdce5-3ddc-11ea-bb65-0242ac110005" satisfied condition "success or failure" Jan 23 12:35:23.327: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-cbbcdce5-3ddc-11ea-bb65-0242ac110005 container dapi-container: STEP: delete the pod Jan 23 12:35:23.851: INFO: Waiting for pod downward-api-cbbcdce5-3ddc-11ea-bb65-0242ac110005 to disappear Jan 23 12:35:23.882: INFO: Pod downward-api-cbbcdce5-3ddc-11ea-bb65-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:35:23.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-pkqc4" for this suite. 
Jan 23 12:35:30.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:35:30.863: INFO: namespace: e2e-tests-downward-api-pkqc4, resource: bindings, ignored listing per whitelist Jan 23 12:35:31.216: INFO: namespace e2e-tests-downward-api-pkqc4 deletion completed in 7.259543329s • [SLOW TEST:20.502 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 12:35:31.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 23 12:35:31.426: INFO: Waiting up to 5m0s for pod "pod-d7fe2aae-3ddc-11ea-bb65-0242ac110005" in namespace "e2e-tests-emptydir-m7vsd" to be "success or failure" Jan 23 12:35:31.448: INFO: Pod "pod-d7fe2aae-3ddc-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.609636ms Jan 23 12:35:33.457: INFO: Pod "pod-d7fe2aae-3ddc-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.030996902s Jan 23 12:35:35.477: INFO: Pod "pod-d7fe2aae-3ddc-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05042765s Jan 23 12:35:37.506: INFO: Pod "pod-d7fe2aae-3ddc-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079623039s Jan 23 12:35:39.519: INFO: Pod "pod-d7fe2aae-3ddc-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.092234353s Jan 23 12:35:41.527: INFO: Pod "pod-d7fe2aae-3ddc-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.101098052s STEP: Saw pod success Jan 23 12:35:41.527: INFO: Pod "pod-d7fe2aae-3ddc-11ea-bb65-0242ac110005" satisfied condition "success or failure" Jan 23 12:35:41.536: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d7fe2aae-3ddc-11ea-bb65-0242ac110005 container test-container: STEP: delete the pod Jan 23 12:35:41.596: INFO: Waiting for pod pod-d7fe2aae-3ddc-11ea-bb65-0242ac110005 to disappear Jan 23 12:35:41.604: INFO: Pod pod-d7fe2aae-3ddc-11ea-bb65-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:35:41.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-m7vsd" for this suite. 
Jan 23 12:35:47.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:35:47.716: INFO: namespace: e2e-tests-emptydir-m7vsd, resource: bindings, ignored listing per whitelist Jan 23 12:35:47.804: INFO: namespace e2e-tests-emptydir-m7vsd deletion completed in 6.188774895s • [SLOW TEST:16.587 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 12:35:47.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jan 23 12:35:49.551: INFO: Pod name wrapped-volume-race-e2c8c2b8-3ddc-11ea-bb65-0242ac110005: Found 0 pods out of 5 Jan 23 12:35:54.593: INFO: Pod name wrapped-volume-race-e2c8c2b8-3ddc-11ea-bb65-0242ac110005: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-e2c8c2b8-3ddc-11ea-bb65-0242ac110005 in namespace 
e2e-tests-emptydir-wrapper-kqfmz, will wait for the garbage collector to delete the pods Jan 23 12:38:28.773: INFO: Deleting ReplicationController wrapped-volume-race-e2c8c2b8-3ddc-11ea-bb65-0242ac110005 took: 31.471792ms Jan 23 12:38:28.974: INFO: Terminating ReplicationController wrapped-volume-race-e2c8c2b8-3ddc-11ea-bb65-0242ac110005 pods took: 201.151287ms STEP: Creating RC which spawns configmap-volume pods Jan 23 12:39:13.672: INFO: Pod name wrapped-volume-race-5c6a4444-3ddd-11ea-bb65-0242ac110005: Found 0 pods out of 5 Jan 23 12:39:18.847: INFO: Pod name wrapped-volume-race-5c6a4444-3ddd-11ea-bb65-0242ac110005: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-5c6a4444-3ddd-11ea-bb65-0242ac110005 in namespace e2e-tests-emptydir-wrapper-kqfmz, will wait for the garbage collector to delete the pods Jan 23 12:41:15.058: INFO: Deleting ReplicationController wrapped-volume-race-5c6a4444-3ddd-11ea-bb65-0242ac110005 took: 34.819036ms Jan 23 12:41:15.359: INFO: Terminating ReplicationController wrapped-volume-race-5c6a4444-3ddd-11ea-bb65-0242ac110005 pods took: 300.457104ms STEP: Creating RC which spawns configmap-volume pods Jan 23 12:42:03.869: INFO: Pod name wrapped-volume-race-c1cb0e63-3ddd-11ea-bb65-0242ac110005: Found 0 pods out of 5 Jan 23 12:42:08.922: INFO: Pod name wrapped-volume-race-c1cb0e63-3ddd-11ea-bb65-0242ac110005: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-c1cb0e63-3ddd-11ea-bb65-0242ac110005 in namespace e2e-tests-emptydir-wrapper-kqfmz, will wait for the garbage collector to delete the pods Jan 23 12:43:53.066: INFO: Deleting ReplicationController wrapped-volume-race-c1cb0e63-3ddd-11ea-bb65-0242ac110005 took: 24.212202ms Jan 23 12:43:53.467: INFO: Terminating ReplicationController wrapped-volume-race-c1cb0e63-3ddd-11ea-bb65-0242ac110005 pods took: 400.518677ms STEP: Cleaning up the configMaps [AfterEach] 
[sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:44:44.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-kqfmz" for this suite. Jan 23 12:44:56.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:44:56.620: INFO: namespace: e2e-tests-emptydir-wrapper-kqfmz, resource: bindings, ignored listing per whitelist Jan 23 12:44:56.633: INFO: namespace e2e-tests-emptydir-wrapper-kqfmz deletion completed in 12.225738612s • [SLOW TEST:548.829 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 12:44:56.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 23 12:44:56.934: INFO: Creating deployment "nginx-deployment" Jan 23 12:44:56.952: INFO: Waiting for observed generation 1 Jan 23 
12:44:59.360: INFO: Waiting for all required pods to come up Jan 23 12:44:59.384: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Jan 23 12:45:39.639: INFO: Waiting for deployment "nginx-deployment" to complete Jan 23 12:45:39.655: INFO: Updating deployment "nginx-deployment" with a non-existent image Jan 23 12:45:39.671: INFO: Updating deployment nginx-deployment Jan 23 12:45:39.671: INFO: Waiting for observed generation 2 Jan 23 12:45:41.985: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jan 23 12:45:43.873: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jan 23 12:45:43.879: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jan 23 12:45:44.225: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jan 23 12:45:44.225: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jan 23 12:45:44.256: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jan 23 12:45:44.490: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Jan 23 12:45:44.491: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Jan 23 12:45:44.547: INFO: Updating deployment nginx-deployment Jan 23 12:45:44.547: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Jan 23 12:45:45.733: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jan 23 12:45:48.746: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jan 23 12:45:50.806: INFO: Deployment "nginx-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8pq9l/deployments/nginx-deployment,UID:2912fbdb-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190945,Generation:3,CreationTimestamp:2020-01-23 12:44:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-01-23 12:45:40 +0000 UTC 2020-01-23 12:44:56 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-01-23 12:45:47 +0000 UTC 2020-01-23 12:45:47 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Jan 23 12:45:51.295: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8pq9l/replicasets/nginx-deployment-5c98f8fb5,UID:428d3861-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190986,Generation:3,CreationTimestamp:2020-01-23 12:45:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 2912fbdb-3dde-11ea-a994-fa163e34d433 0xc0023d53b7 0xc0023d53b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 23 12:45:51.295: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jan 23 12:45:51.296: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8pq9l/replicasets/nginx-deployment-85ddf47c5d,UID:2916cae4-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190987,Generation:3,CreationTimestamp:2020-01-23 12:44:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 2912fbdb-3dde-11ea-a994-fa163e34d433 0xc0023d5477 0xc0023d5478}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jan 23 12:45:51.758: INFO: Pod "nginx-deployment-5c98f8fb5-2tlkd" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2tlkd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-5c98f8fb5-2tlkd,UID:429e0840-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190905,Generation:0,CreationTimestamp:2020-01-23 12:45:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 428d3861-3dde-11ea-a994-fa163e34d433 0xc002021387 0xc002021388}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020213f0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc002021410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:39 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-23 12:45:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.759: INFO: Pod "nginx-deployment-5c98f8fb5-5gq4g" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5gq4g,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-5c98f8fb5-5gq4g,UID:47409d50-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190975,Generation:0,CreationTimestamp:2020-01-23 12:45:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 428d3861-3dde-11ea-a994-fa163e34d433 0xc0020214d7 0xc0020214d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002021550} {node.kubernetes.io/unreachable Exists NoExecute 0xc002021570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.759: INFO: Pod "nginx-deployment-5c98f8fb5-h5kjp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-h5kjp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-5c98f8fb5-h5kjp,UID:429e0da4-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190919,Generation:0,CreationTimestamp:2020-01-23 
12:45:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 428d3861-3dde-11ea-a994-fa163e34d433 0xc0020215e7 0xc0020215e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002021650} {node.kubernetes.io/unreachable Exists NoExecute 0xc002021670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-01-23 12:45:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:39 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-23 12:45:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.760: INFO: Pod "nginx-deployment-5c98f8fb5-jxpv8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jxpv8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-5c98f8fb5-jxpv8,UID:473b51b8-3dde-11ea-a994-fa163e34d433,ResourceVersion:19191001,Generation:0,CreationTimestamp:2020-01-23 12:45:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 428d3861-3dde-11ea-a994-fa163e34d433 0xc002021737 0xc002021738}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020217a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020217c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:47 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-23 12:45:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.760: INFO: Pod "nginx-deployment-5c98f8fb5-lfqcs" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-lfqcs,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-5c98f8fb5-lfqcs,UID:4761fa6a-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190970,Generation:0,CreationTimestamp:2020-01-23 12:45:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 428d3861-3dde-11ea-a994-fa163e34d433 0xc002021887 0xc002021888}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020218f0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc002021910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.760: INFO: Pod "nginx-deployment-5c98f8fb5-lppsj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-lppsj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-5c98f8fb5-lppsj,UID:4761c8db-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190967,Generation:0,CreationTimestamp:2020-01-23 12:45:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 428d3861-3dde-11ea-a994-fa163e34d433 0xc002021987 0xc002021988}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002021a20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002021a40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.760: INFO: Pod "nginx-deployment-5c98f8fb5-pjgf8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-pjgf8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-5c98f8fb5-pjgf8,UID:4761e0e6-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190971,Generation:0,CreationTimestamp:2020-01-23 12:45:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 428d3861-3dde-11ea-a994-fa163e34d433 0xc002021ab7 0xc002021ab8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002021b20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002021bc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.761: INFO: Pod "nginx-deployment-5c98f8fb5-pq6hp" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-pq6hp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-5c98f8fb5-pq6hp,UID:4740be9c-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190972,Generation:0,CreationTimestamp:2020-01-23 12:45:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 428d3861-3dde-11ea-a994-fa163e34d433 0xc002021c37 0xc002021c38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002021ca0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc002021cc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.761: INFO: Pod "nginx-deployment-5c98f8fb5-qjwk4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qjwk4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-5c98f8fb5-qjwk4,UID:47722fd1-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190979,Generation:0,CreationTimestamp:2020-01-23 12:45:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 428d3861-3dde-11ea-a994-fa163e34d433 0xc002021da7 0xc002021da8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002021e10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002021e30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:48 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.761: INFO: Pod "nginx-deployment-5c98f8fb5-qlqvx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qlqvx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-5c98f8fb5-qlqvx,UID:429a2a5d-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190894,Generation:0,CreationTimestamp:2020-01-23 12:45:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 428d3861-3dde-11ea-a994-fa163e34d433 0xc002021ea7 0xc002021ea8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002021f10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002021f30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:39 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-23 12:45:39 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.762: INFO: Pod "nginx-deployment-5c98f8fb5-qv22x" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qv22x,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-5c98f8fb5-qv22x,UID:42e86ef9-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190924,Generation:0,CreationTimestamp:2020-01-23 12:45:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 428d3861-3dde-11ea-a994-fa163e34d433 0xc0022840d7 0xc0022840d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022841b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022841d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:40 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-23 12:45:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.762: INFO: Pod "nginx-deployment-5c98f8fb5-rh86g" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rh86g,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-5c98f8fb5-rh86g,UID:42e43b47-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190923,Generation:0,CreationTimestamp:2020-01-23 12:45:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 428d3861-3dde-11ea-a994-fa163e34d433 0xc002284397 0xc002284398}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002284400} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc002284420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:40 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-23 12:45:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.762: INFO: Pod "nginx-deployment-5c98f8fb5-xk89l" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xk89l,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-5c98f8fb5-xk89l,UID:47615892-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190959,Generation:0,CreationTimestamp:2020-01-23 12:45:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 428d3861-3dde-11ea-a994-fa163e34d433 0xc0022845f7 0xc0022845f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002284660} {node.kubernetes.io/unreachable Exists NoExecute 0xc002284680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.763: INFO: Pod "nginx-deployment-85ddf47c5d-2bc2t" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2bc2t,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-85ddf47c5d-2bc2t,UID:292eab00-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190841,Generation:0,CreationTimestamp:2020-01-23 
12:44:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2916cae4-3dde-11ea-a994-fa163e34d433 0xc0022846f7 0xc0022846f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022848a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022848c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:44:57 +0000 UTC } {Ready True 
0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:44:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-23 12:44:57 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-23 12:45:26 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://5937cfe15d0d37cb7eb5cd59fb0ad694ab554b01e4258b7ee3cd767c9265e4fe}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.763: INFO: Pod "nginx-deployment-85ddf47c5d-2r7jl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2r7jl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-85ddf47c5d-2r7jl,UID:476595be-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190965,Generation:0,CreationTimestamp:2020-01-23 12:45:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2916cae4-3dde-11ea-a994-fa163e34d433 0xc002284987 0xc002284988}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002284ab0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002284ad0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.763: INFO: Pod "nginx-deployment-85ddf47c5d-2tfwt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2tfwt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-85ddf47c5d-2tfwt,UID:477273b3-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190984,Generation:0,CreationTimestamp:2020-01-23 12:45:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet 
nginx-deployment-85ddf47c5d 2916cae4-3dde-11ea-a994-fa163e34d433 0xc002284b47 0xc002284b48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002284bb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002284bd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:48 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.763: INFO: Pod "nginx-deployment-85ddf47c5d-5cgch" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5cgch,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-85ddf47c5d-5cgch,UID:47721a00-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190985,Generation:0,CreationTimestamp:2020-01-23 12:45:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2916cae4-3dde-11ea-a994-fa163e34d433 0xc002284cd7 0xc002284cd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002284d50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002284d70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:49 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.764: INFO: Pod "nginx-deployment-85ddf47c5d-9ghsd" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9ghsd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-85ddf47c5d-9ghsd,UID:2957e2a9-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190836,Generation:0,CreationTimestamp:2020-01-23 12:44:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2916cae4-3dde-11ea-a994-fa163e34d433 0xc002284de7 0xc002284de8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002284eb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002284ed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:44:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2020-01-23 12:45:00 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-23 12:45:31 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0066900b4b614dabdc769b68fb74b4f03cf4823eb663bb22ab8f3da1cb65ea71}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.764: INFO: Pod "nginx-deployment-85ddf47c5d-b95gn" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-b95gn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-85ddf47c5d-b95gn,UID:47724c13-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190982,Generation:0,CreationTimestamp:2020-01-23 12:45:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2916cae4-3dde-11ea-a994-fa163e34d433 0xc0022851c7 0xc0022851c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002285230} {node.kubernetes.io/unreachable Exists NoExecute 0xc002285250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:48 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.764: INFO: Pod "nginx-deployment-85ddf47c5d-bkcqc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bkcqc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-85ddf47c5d-bkcqc,UID:2939ffcc-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190848,Generation:0,CreationTimestamp:2020-01-23 12:44:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2916cae4-3dde-11ea-a994-fa163e34d433 0xc0022852c7 0xc0022852c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002285330} {node.kubernetes.io/unreachable Exists NoExecute 0xc002285350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:44:57 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:44:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-01-23 12:44:57 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-23 12:45:32 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://5ab7f53058ac34ff78d9616fca038c6d71999f44683ccf3f6d716c2252cb252f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.765: INFO: Pod "nginx-deployment-85ddf47c5d-cqmnd" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cqmnd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-85ddf47c5d-cqmnd,UID:4743500d-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190968,Generation:0,CreationTimestamp:2020-01-23 12:45:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2916cae4-3dde-11ea-a994-fa163e34d433 0xc002285417 0xc002285418}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002285480} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022854a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.765: INFO: Pod "nginx-deployment-85ddf47c5d-crwbp" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-crwbp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-85ddf47c5d-crwbp,UID:29586ffa-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190859,Generation:0,CreationTimestamp:2020-01-23 12:44:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2916cae4-3dde-11ea-a994-fa163e34d433 0xc002285517 0xc002285518}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002285580} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022855a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:44:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:44:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-01-23 12:44:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-23 12:45:31 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://28f0babeab85f799ff51fb60f7b1cdae934feca6e63ab1b7da09663de24e3959}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.765: INFO: Pod "nginx-deployment-85ddf47c5d-h89k6" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-h89k6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-85ddf47c5d-h89k6,UID:2939bc2e-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190856,Generation:0,CreationTimestamp:2020-01-23 12:44:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2916cae4-3dde-11ea-a994-fa163e34d433 0xc002285667 0xc002285668}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0022856d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022856f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:44:57 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:44:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-01-23 12:44:57 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-23 12:45:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://fd0088b663e35f0d12592d26029df57aac3a503443d66d93ef5f694007c5b5d4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.766: INFO: Pod "nginx-deployment-85ddf47c5d-hk945" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hk945,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-85ddf47c5d-hk945,UID:476549fb-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190973,Generation:0,CreationTimestamp:2020-01-23 12:45:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2916cae4-3dde-11ea-a994-fa163e34d433 0xc0022857b7 0xc0022857b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002285820} {node.kubernetes.io/unreachable Exists NoExecute 0xc002285840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.766: INFO: Pod "nginx-deployment-85ddf47c5d-ktsrw" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ktsrw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-85ddf47c5d-ktsrw,UID:47726933-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190983,Generation:0,CreationTimestamp:2020-01-23 12:45:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2916cae4-3dde-11ea-a994-fa163e34d433 0xc0022858b7 0xc0022858b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002285920} {node.kubernetes.io/unreachable Exists NoExecute 0xc002285940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:48 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.766: INFO: Pod "nginx-deployment-85ddf47c5d-mpqdj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mpqdj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-85ddf47c5d-mpqdj,UID:473b0939-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190995,Generation:0,CreationTimestamp:2020-01-23 12:45:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2916cae4-3dde-11ea-a994-fa163e34d433 0xc0022859b7 0xc0022859b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002285a20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002285a40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:47 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-23 12:45:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.767: INFO: Pod "nginx-deployment-85ddf47c5d-n4h76" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-n4h76,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-85ddf47c5d-n4h76,UID:47718d0c-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190981,Generation:0,CreationTimestamp:2020-01-23 12:45:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2916cae4-3dde-11ea-a994-fa163e34d433 0xc002285af7 0xc002285af8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002285b60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002285b80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:48 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.767: INFO: Pod "nginx-deployment-85ddf47c5d-pjvjk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pjvjk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-85ddf47c5d-pjvjk,UID:476569ad-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190964,Generation:0,CreationTimestamp:2020-01-23 12:45:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2916cae4-3dde-11ea-a994-fa163e34d433 0xc002285bf7 0xc002285bf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002285c60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002285c80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.767: INFO: Pod "nginx-deployment-85ddf47c5d-prn6q" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-prn6q,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-85ddf47c5d-prn6q,UID:293a0bf3-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190835,Generation:0,CreationTimestamp:2020-01-23 12:44:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2916cae4-3dde-11ea-a994-fa163e34d433 0xc002285cf7 0xc002285cf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002285d60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002285d80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:44:57 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:44:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-23 12:44:57 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-23 12:45:26 +0000 UTC,} 
nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4cecc77344f8c04b25a344ba005210ebe3dc813522d57264bf9b163db44dce5b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.768: INFO: Pod "nginx-deployment-85ddf47c5d-vrpsk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vrpsk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-85ddf47c5d-vrpsk,UID:4765b865-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190966,Generation:0,CreationTimestamp:2020-01-23 12:45:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2916cae4-3dde-11ea-a994-fa163e34d433 0xc002285e47 0xc002285e48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002285eb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002285ed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.768: INFO: Pod "nginx-deployment-85ddf47c5d-w9jst" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-w9jst,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-85ddf47c5d-w9jst,UID:47433643-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190969,Generation:0,CreationTimestamp:2020-01-23 12:45:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2916cae4-3dde-11ea-a994-fa163e34d433 0xc002285f47 0xc002285f48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002285fb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002285fd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.768: INFO: Pod "nginx-deployment-85ddf47c5d-wjxkt" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wjxkt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-85ddf47c5d-wjxkt,UID:293a3d19-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190845,Generation:0,CreationTimestamp:2020-01-23 12:44:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2916cae4-3dde-11ea-a994-fa163e34d433 0xc001b96047 0xc001b96048}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc001b960b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b960d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:44:57 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:44:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-01-23 12:44:57 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-23 12:45:23 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://caa1e50f718267c626b03621c0739a73bc756d42cfbe0a46ff9f06d35ddb4897}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 23 12:45:51.768: INFO: Pod "nginx-deployment-85ddf47c5d-zjlrh" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zjlrh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8pq9l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8pq9l/pods/nginx-deployment-85ddf47c5d-zjlrh,UID:2930fe50-3dde-11ea-a994-fa163e34d433,ResourceVersion:19190868,Generation:0,CreationTimestamp:2020-01-23 12:44:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2916cae4-3dde-11ea-a994-fa163e34d433 0xc001b961d7 0xc001b961d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s854z {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-s854z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s854z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b96240} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b96260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:44:57 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:45:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:44:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-01-23 12:44:57 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-23 12:45:32 +0000 
UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ecdfa936057634dba93bd0d222f14e95fd3dae9746e7cae11febf5a989521034}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:45:51.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-8pq9l" for this suite. Jan 23 12:46:51.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:46:52.653: INFO: namespace: e2e-tests-deployment-8pq9l, resource: bindings, ignored listing per whitelist Jan 23 12:46:52.723: INFO: namespace e2e-tests-deployment-8pq9l deletion completed in 59.152039861s • [SLOW TEST:116.090 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 12:46:52.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-j58x4 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-j58x4 STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-j58x4 Jan 23 12:46:53.253: INFO: Found 0 stateful pods, waiting for 1 Jan 23 12:47:03.279: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Jan 23 12:47:13.262: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Jan 23 12:47:23.272: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jan 23 12:47:23.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j58x4 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 23 12:47:24.169: INFO: stderr: "I0123 12:47:23.534739 3778 log.go:172] (0xc00070e370) (0xc0005f52c0) Create stream\nI0123 12:47:23.535095 3778 log.go:172] (0xc00070e370) (0xc0005f52c0) Stream added, broadcasting: 1\nI0123 12:47:23.543990 3778 log.go:172] (0xc00070e370) Reply frame received for 1\nI0123 12:47:23.544084 3778 log.go:172] (0xc00070e370) (0xc00070c000) Create stream\nI0123 12:47:23.544112 3778 log.go:172] (0xc00070e370) (0xc00070c000) Stream added, broadcasting: 3\nI0123 12:47:23.545683 3778 log.go:172] (0xc00070e370) Reply frame received for 3\nI0123 12:47:23.545706 3778 log.go:172] 
(0xc00070e370) (0xc0005f5360) Create stream\nI0123 12:47:23.545717 3778 log.go:172] (0xc00070e370) (0xc0005f5360) Stream added, broadcasting: 5\nI0123 12:47:23.547247 3778 log.go:172] (0xc00070e370) Reply frame received for 5\nI0123 12:47:23.897888 3778 log.go:172] (0xc00070e370) Data frame received for 3\nI0123 12:47:23.897971 3778 log.go:172] (0xc00070c000) (3) Data frame handling\nI0123 12:47:23.898009 3778 log.go:172] (0xc00070c000) (3) Data frame sent\nI0123 12:47:24.156138 3778 log.go:172] (0xc00070e370) (0xc00070c000) Stream removed, broadcasting: 3\nI0123 12:47:24.156456 3778 log.go:172] (0xc00070e370) Data frame received for 1\nI0123 12:47:24.156490 3778 log.go:172] (0xc00070e370) (0xc0005f5360) Stream removed, broadcasting: 5\nI0123 12:47:24.156509 3778 log.go:172] (0xc0005f52c0) (1) Data frame handling\nI0123 12:47:24.156541 3778 log.go:172] (0xc0005f52c0) (1) Data frame sent\nI0123 12:47:24.156558 3778 log.go:172] (0xc00070e370) (0xc0005f52c0) Stream removed, broadcasting: 1\nI0123 12:47:24.156573 3778 log.go:172] (0xc00070e370) Go away received\nI0123 12:47:24.157038 3778 log.go:172] (0xc00070e370) (0xc0005f52c0) Stream removed, broadcasting: 1\nI0123 12:47:24.157066 3778 log.go:172] (0xc00070e370) (0xc00070c000) Stream removed, broadcasting: 3\nI0123 12:47:24.157075 3778 log.go:172] (0xc00070e370) (0xc0005f5360) Stream removed, broadcasting: 5\n" Jan 23 12:47:24.169: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 23 12:47:24.169: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 23 12:47:24.191: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 23 12:47:24.191: INFO: Waiting for statefulset status.replicas updated to 0 Jan 23 12:47:24.335: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 12:47:24.335: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2020-01-23 12:46:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:46:53 +0000 UTC }] Jan 23 12:47:24.335: INFO: Jan 23 12:47:24.335: INFO: StatefulSet ss has not reached scale 3, at 1 Jan 23 12:47:25.630: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.98294643s Jan 23 12:47:26.823: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.687781632s Jan 23 12:47:28.185: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.494728221s Jan 23 12:47:29.339: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.133147143s Jan 23 12:47:30.381: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.978946074s Jan 23 12:47:31.408: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.93674068s Jan 23 12:47:33.036: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.910319526s Jan 23 12:47:34.756: INFO: Verifying statefulset ss doesn't scale past 3 for another 281.445746ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-j58x4 Jan 23 12:47:35.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j58x4 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 23 12:47:36.615: INFO: stderr: "I0123 12:47:36.025864 3799 log.go:172] (0xc00013a6e0) (0xc0006d6780) Create stream\nI0123 12:47:36.025977 3799 log.go:172] (0xc00013a6e0) (0xc0006d6780) Stream added, broadcasting: 1\nI0123 12:47:36.041336 3799 log.go:172] (0xc00013a6e0) Reply frame received for 1\nI0123 12:47:36.041399 3799 log.go:172] (0xc00013a6e0) (0xc00020a460) Create 
stream\nI0123 12:47:36.041410 3799 log.go:172] (0xc00013a6e0) (0xc00020a460) Stream added, broadcasting: 3\nI0123 12:47:36.049481 3799 log.go:172] (0xc00013a6e0) Reply frame received for 3\nI0123 12:47:36.049499 3799 log.go:172] (0xc00013a6e0) (0xc000656c80) Create stream\nI0123 12:47:36.049505 3799 log.go:172] (0xc00013a6e0) (0xc000656c80) Stream added, broadcasting: 5\nI0123 12:47:36.050975 3799 log.go:172] (0xc00013a6e0) Reply frame received for 5\nI0123 12:47:36.285449 3799 log.go:172] (0xc00013a6e0) Data frame received for 3\nI0123 12:47:36.285506 3799 log.go:172] (0xc00020a460) (3) Data frame handling\nI0123 12:47:36.285531 3799 log.go:172] (0xc00020a460) (3) Data frame sent\nI0123 12:47:36.602656 3799 log.go:172] (0xc00013a6e0) Data frame received for 1\nI0123 12:47:36.602771 3799 log.go:172] (0xc00013a6e0) (0xc00020a460) Stream removed, broadcasting: 3\nI0123 12:47:36.602889 3799 log.go:172] (0xc0006d6780) (1) Data frame handling\nI0123 12:47:36.602925 3799 log.go:172] (0xc0006d6780) (1) Data frame sent\nI0123 12:47:36.602970 3799 log.go:172] (0xc00013a6e0) (0xc000656c80) Stream removed, broadcasting: 5\nI0123 12:47:36.603009 3799 log.go:172] (0xc00013a6e0) (0xc0006d6780) Stream removed, broadcasting: 1\nI0123 12:47:36.603027 3799 log.go:172] (0xc00013a6e0) Go away received\nI0123 12:47:36.603725 3799 log.go:172] (0xc00013a6e0) (0xc0006d6780) Stream removed, broadcasting: 1\nI0123 12:47:36.603748 3799 log.go:172] (0xc00013a6e0) (0xc00020a460) Stream removed, broadcasting: 3\nI0123 12:47:36.603765 3799 log.go:172] (0xc00013a6e0) (0xc000656c80) Stream removed, broadcasting: 5\n" Jan 23 12:47:36.615: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 23 12:47:36.615: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 23 12:47:36.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j58x4 
ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 23 12:47:36.791: INFO: rc: 1 Jan 23 12:47:36.791: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j58x4 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc000fda120 exit status 1 true [0xc000c44060 0xc000c44078 0xc000c44090] [0xc000c44060 0xc000c44078 0xc000c44090] [0xc000c44070 0xc000c44088] [0x935700 0x935700] 0xc001fd91a0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jan 23 12:47:46.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j58x4 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 23 12:47:47.399: INFO: stderr: "I0123 12:47:47.041305 3841 log.go:172] (0xc000698420) (0xc00058f4a0) Create stream\nI0123 12:47:47.041557 3841 log.go:172] (0xc000698420) (0xc00058f4a0) Stream added, broadcasting: 1\nI0123 12:47:47.046684 3841 log.go:172] (0xc000698420) Reply frame received for 1\nI0123 12:47:47.046726 3841 log.go:172] (0xc000698420) (0xc00031e000) Create stream\nI0123 12:47:47.046738 3841 log.go:172] (0xc000698420) (0xc00031e000) Stream added, broadcasting: 3\nI0123 12:47:47.048038 3841 log.go:172] (0xc000698420) Reply frame received for 3\nI0123 12:47:47.048057 3841 log.go:172] (0xc000698420) (0xc00031e0a0) Create stream\nI0123 12:47:47.048062 3841 log.go:172] (0xc000698420) (0xc00031e0a0) Stream added, broadcasting: 5\nI0123 12:47:47.049024 3841 log.go:172] (0xc000698420) Reply frame received for 5\nI0123 12:47:47.201489 3841 log.go:172] (0xc000698420) Data frame received for 3\nI0123 12:47:47.201584 3841 log.go:172] (0xc00031e000) (3) Data frame handling\nI0123 12:47:47.201600 3841 log.go:172] (0xc00031e000) (3) Data 
frame sent\nI0123 12:47:47.201654 3841 log.go:172] (0xc000698420) Data frame received for 5\nI0123 12:47:47.201662 3841 log.go:172] (0xc00031e0a0) (5) Data frame handling\nI0123 12:47:47.201675 3841 log.go:172] (0xc00031e0a0) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0123 12:47:47.387454 3841 log.go:172] (0xc000698420) (0xc00031e0a0) Stream removed, broadcasting: 5\nI0123 12:47:47.387602 3841 log.go:172] (0xc000698420) Data frame received for 1\nI0123 12:47:47.387616 3841 log.go:172] (0xc00058f4a0) (1) Data frame handling\nI0123 12:47:47.387634 3841 log.go:172] (0xc00058f4a0) (1) Data frame sent\nI0123 12:47:47.387643 3841 log.go:172] (0xc000698420) (0xc00058f4a0) Stream removed, broadcasting: 1\nI0123 12:47:47.388511 3841 log.go:172] (0xc000698420) (0xc00031e000) Stream removed, broadcasting: 3\nI0123 12:47:47.388633 3841 log.go:172] (0xc000698420) (0xc00058f4a0) Stream removed, broadcasting: 1\nI0123 12:47:47.388647 3841 log.go:172] (0xc000698420) (0xc00031e000) Stream removed, broadcasting: 3\nI0123 12:47:47.388654 3841 log.go:172] (0xc000698420) (0xc00031e0a0) Stream removed, broadcasting: 5\n" Jan 23 12:47:47.400: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 23 12:47:47.400: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 23 12:47:47.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j58x4 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 23 12:47:47.875: INFO: stderr: "I0123 12:47:47.591530 3863 log.go:172] (0xc0006bc370) (0xc00065d360) Create stream\nI0123 12:47:47.591757 3863 log.go:172] (0xc0006bc370) (0xc00065d360) Stream added, broadcasting: 1\nI0123 12:47:47.596998 3863 log.go:172] (0xc0006bc370) Reply frame received for 1\nI0123 12:47:47.597040 3863 log.go:172] (0xc0006bc370) (0xc0006ba000) Create 
stream\nI0123 12:47:47.597052 3863 log.go:172] (0xc0006bc370) (0xc0006ba000) Stream added, broadcasting: 3\nI0123 12:47:47.598301 3863 log.go:172] (0xc0006bc370) Reply frame received for 3\nI0123 12:47:47.598325 3863 log.go:172] (0xc0006bc370) (0xc00065d400) Create stream\nI0123 12:47:47.598336 3863 log.go:172] (0xc0006bc370) (0xc00065d400) Stream added, broadcasting: 5\nI0123 12:47:47.599493 3863 log.go:172] (0xc0006bc370) Reply frame received for 5\nI0123 12:47:47.718722 3863 log.go:172] (0xc0006bc370) Data frame received for 3\nI0123 12:47:47.718802 3863 log.go:172] (0xc0006ba000) (3) Data frame handling\nI0123 12:47:47.718843 3863 log.go:172] (0xc0006ba000) (3) Data frame sent\nI0123 12:47:47.721255 3863 log.go:172] (0xc0006bc370) Data frame received for 5\nI0123 12:47:47.721271 3863 log.go:172] (0xc00065d400) (5) Data frame handling\nI0123 12:47:47.721300 3863 log.go:172] (0xc00065d400) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0123 12:47:47.865243 3863 log.go:172] (0xc0006bc370) Data frame received for 1\nI0123 12:47:47.865635 3863 log.go:172] (0xc0006bc370) (0xc0006ba000) Stream removed, broadcasting: 3\nI0123 12:47:47.865742 3863 log.go:172] (0xc00065d360) (1) Data frame handling\nI0123 12:47:47.865779 3863 log.go:172] (0xc00065d360) (1) Data frame sent\nI0123 12:47:47.865823 3863 log.go:172] (0xc0006bc370) (0xc00065d400) Stream removed, broadcasting: 5\nI0123 12:47:47.865843 3863 log.go:172] (0xc0006bc370) (0xc00065d360) Stream removed, broadcasting: 1\nI0123 12:47:47.865856 3863 log.go:172] (0xc0006bc370) Go away received\nI0123 12:47:47.866362 3863 log.go:172] (0xc0006bc370) (0xc00065d360) Stream removed, broadcasting: 1\nI0123 12:47:47.866395 3863 log.go:172] (0xc0006bc370) (0xc0006ba000) Stream removed, broadcasting: 3\nI0123 12:47:47.866428 3863 log.go:172] (0xc0006bc370) (0xc00065d400) Stream removed, broadcasting: 5\n" Jan 23 12:47:47.875: INFO: stdout: "'/tmp/index.html' -> 
'/usr/share/nginx/html/index.html'\n" Jan 23 12:47:47.875: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 23 12:47:47.964: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 23 12:47:47.965: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 23 12:47:47.965: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jan 23 12:47:47.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j58x4 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 23 12:47:48.441: INFO: stderr: "I0123 12:47:48.158238 3885 log.go:172] (0xc00087e160) (0xc0006a1400) Create stream\nI0123 12:47:48.158534 3885 log.go:172] (0xc00087e160) (0xc0006a1400) Stream added, broadcasting: 1\nI0123 12:47:48.166739 3885 log.go:172] (0xc00087e160) Reply frame received for 1\nI0123 12:47:48.166796 3885 log.go:172] (0xc00087e160) (0xc000370000) Create stream\nI0123 12:47:48.166810 3885 log.go:172] (0xc00087e160) (0xc000370000) Stream added, broadcasting: 3\nI0123 12:47:48.168657 3885 log.go:172] (0xc00087e160) Reply frame received for 3\nI0123 12:47:48.168684 3885 log.go:172] (0xc00087e160) (0xc0006a14a0) Create stream\nI0123 12:47:48.168695 3885 log.go:172] (0xc00087e160) (0xc0006a14a0) Stream added, broadcasting: 5\nI0123 12:47:48.171440 3885 log.go:172] (0xc00087e160) Reply frame received for 5\nI0123 12:47:48.329854 3885 log.go:172] (0xc00087e160) Data frame received for 3\nI0123 12:47:48.329929 3885 log.go:172] (0xc000370000) (3) Data frame handling\nI0123 12:47:48.329956 3885 log.go:172] (0xc000370000) (3) Data frame sent\nI0123 12:47:48.430848 3885 log.go:172] (0xc00087e160) (0xc0006a14a0) Stream removed, broadcasting: 5\nI0123 12:47:48.430955 3885 
log.go:172] (0xc00087e160) Data frame received for 1\nI0123 12:47:48.430988 3885 log.go:172] (0xc00087e160) (0xc000370000) Stream removed, broadcasting: 3\nI0123 12:47:48.431039 3885 log.go:172] (0xc0006a1400) (1) Data frame handling\nI0123 12:47:48.431061 3885 log.go:172] (0xc0006a1400) (1) Data frame sent\nI0123 12:47:48.431069 3885 log.go:172] (0xc00087e160) (0xc0006a1400) Stream removed, broadcasting: 1\nI0123 12:47:48.431083 3885 log.go:172] (0xc00087e160) Go away received\nI0123 12:47:48.431422 3885 log.go:172] (0xc00087e160) (0xc0006a1400) Stream removed, broadcasting: 1\nI0123 12:47:48.431435 3885 log.go:172] (0xc00087e160) (0xc000370000) Stream removed, broadcasting: 3\nI0123 12:47:48.431443 3885 log.go:172] (0xc00087e160) (0xc0006a14a0) Stream removed, broadcasting: 5\n" Jan 23 12:47:48.441: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 23 12:47:48.441: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 23 12:47:48.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j58x4 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 23 12:47:48.969: INFO: stderr: "I0123 12:47:48.645991 3907 log.go:172] (0xc0006920b0) (0xc0006d6640) Create stream\nI0123 12:47:48.646110 3907 log.go:172] (0xc0006920b0) (0xc0006d6640) Stream added, broadcasting: 1\nI0123 12:47:48.652977 3907 log.go:172] (0xc0006920b0) Reply frame received for 1\nI0123 12:47:48.653053 3907 log.go:172] (0xc0006920b0) (0xc000574dc0) Create stream\nI0123 12:47:48.653062 3907 log.go:172] (0xc0006920b0) (0xc000574dc0) Stream added, broadcasting: 3\nI0123 12:47:48.654219 3907 log.go:172] (0xc0006920b0) Reply frame received for 3\nI0123 12:47:48.654242 3907 log.go:172] (0xc0006920b0) (0xc000140000) Create stream\nI0123 12:47:48.654252 3907 log.go:172] (0xc0006920b0) (0xc000140000) Stream added, broadcasting: 
5\nI0123 12:47:48.655210 3907 log.go:172] (0xc0006920b0) Reply frame received for 5\nI0123 12:47:48.774736 3907 log.go:172] (0xc0006920b0) Data frame received for 3\nI0123 12:47:48.774822 3907 log.go:172] (0xc000574dc0) (3) Data frame handling\nI0123 12:47:48.774862 3907 log.go:172] (0xc000574dc0) (3) Data frame sent\nI0123 12:47:48.958645 3907 log.go:172] (0xc0006920b0) (0xc000574dc0) Stream removed, broadcasting: 3\nI0123 12:47:48.958891 3907 log.go:172] (0xc0006920b0) Data frame received for 1\nI0123 12:47:48.958904 3907 log.go:172] (0xc0006d6640) (1) Data frame handling\nI0123 12:47:48.958922 3907 log.go:172] (0xc0006d6640) (1) Data frame sent\nI0123 12:47:48.958965 3907 log.go:172] (0xc0006920b0) (0xc0006d6640) Stream removed, broadcasting: 1\nI0123 12:47:48.959129 3907 log.go:172] (0xc0006920b0) (0xc000140000) Stream removed, broadcasting: 5\nI0123 12:47:48.959303 3907 log.go:172] (0xc0006920b0) Go away received\nI0123 12:47:48.959452 3907 log.go:172] (0xc0006920b0) (0xc0006d6640) Stream removed, broadcasting: 1\nI0123 12:47:48.959503 3907 log.go:172] (0xc0006920b0) (0xc000574dc0) Stream removed, broadcasting: 3\nI0123 12:47:48.959518 3907 log.go:172] (0xc0006920b0) (0xc000140000) Stream removed, broadcasting: 5\n" Jan 23 12:47:48.970: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 23 12:47:48.970: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 23 12:47:48.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j58x4 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 23 12:47:49.755: INFO: stderr: "I0123 12:47:49.313553 3929 log.go:172] (0xc0006de0b0) (0xc0007006e0) Create stream\nI0123 12:47:49.313762 3929 log.go:172] (0xc0006de0b0) (0xc0007006e0) Stream added, broadcasting: 1\nI0123 12:47:49.321920 3929 log.go:172] (0xc0006de0b0) Reply frame received for 
1\nI0123 12:47:49.321997 3929 log.go:172] (0xc0006de0b0) (0xc0001d6460) Create stream\nI0123 12:47:49.322021 3929 log.go:172] (0xc0006de0b0) (0xc0001d6460) Stream added, broadcasting: 3\nI0123 12:47:49.324157 3929 log.go:172] (0xc0006de0b0) Reply frame received for 3\nI0123 12:47:49.324200 3929 log.go:172] (0xc0006de0b0) (0xc0001d6500) Create stream\nI0123 12:47:49.324214 3929 log.go:172] (0xc0006de0b0) (0xc0001d6500) Stream added, broadcasting: 5\nI0123 12:47:49.331037 3929 log.go:172] (0xc0006de0b0) Reply frame received for 5\nI0123 12:47:49.613106 3929 log.go:172] (0xc0006de0b0) Data frame received for 3\nI0123 12:47:49.613153 3929 log.go:172] (0xc0001d6460) (3) Data frame handling\nI0123 12:47:49.613188 3929 log.go:172] (0xc0001d6460) (3) Data frame sent\nI0123 12:47:49.738776 3929 log.go:172] (0xc0006de0b0) Data frame received for 1\nI0123 12:47:49.738860 3929 log.go:172] (0xc0007006e0) (1) Data frame handling\nI0123 12:47:49.738899 3929 log.go:172] (0xc0007006e0) (1) Data frame sent\nI0123 12:47:49.739052 3929 log.go:172] (0xc0006de0b0) (0xc0007006e0) Stream removed, broadcasting: 1\nI0123 12:47:49.741031 3929 log.go:172] (0xc0006de0b0) (0xc0001d6460) Stream removed, broadcasting: 3\nI0123 12:47:49.741657 3929 log.go:172] (0xc0006de0b0) (0xc0001d6500) Stream removed, broadcasting: 5\nI0123 12:47:49.741848 3929 log.go:172] (0xc0006de0b0) Go away received\nI0123 12:47:49.742014 3929 log.go:172] (0xc0006de0b0) (0xc0007006e0) Stream removed, broadcasting: 1\nI0123 12:47:49.742091 3929 log.go:172] (0xc0006de0b0) (0xc0001d6460) Stream removed, broadcasting: 3\nI0123 12:47:49.742116 3929 log.go:172] (0xc0006de0b0) (0xc0001d6500) Stream removed, broadcasting: 5\n" Jan 23 12:47:49.755: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 23 12:47:49.755: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 23 12:47:49.755: INFO: Waiting for statefulset 
status.replicas updated to 0 Jan 23 12:47:49.770: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Jan 23 12:47:59.797: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 23 12:47:59.797: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 23 12:47:59.797: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 23 12:47:59.842: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 12:47:59.842: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:46:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:46:53 +0000 UTC }] Jan 23 12:47:59.842: INFO: ss-1 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC }] Jan 23 12:47:59.842: INFO: ss-2 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 
+0000 UTC 2020-01-23 12:47:24 +0000 UTC }] Jan 23 12:47:59.842: INFO: Jan 23 12:47:59.842: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 23 12:48:02.219: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 12:48:02.220: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:46:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:46:53 +0000 UTC }] Jan 23 12:48:02.220: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC }] Jan 23 12:48:02.220: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC }] Jan 23 12:48:02.220: INFO: Jan 23 12:48:02.220: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 23 12:48:03.253: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 12:48:03.253: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 
+0000 UTC 2020-01-23 12:46:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:46:53 +0000 UTC }] Jan 23 12:48:03.253: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC }] Jan 23 12:48:03.253: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC }] Jan 23 12:48:03.253: INFO: Jan 23 12:48:03.253: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 23 12:48:04.274: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 12:48:04.274: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:46:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:48 +0000 UTC ContainersNotReady containers with unready status: 
[nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:46:53 +0000 UTC }] Jan 23 12:48:04.274: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC }] Jan 23 12:48:04.274: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC }] Jan 23 12:48:04.274: INFO: Jan 23 12:48:04.274: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 23 12:48:05.362: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 12:48:05.362: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:46:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:46:53 +0000 UTC }] Jan 23 12:48:05.363: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:49 +0000 
UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC }] Jan 23 12:48:05.363: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC }] Jan 23 12:48:05.363: INFO: Jan 23 12:48:05.363: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 23 12:48:07.260: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 12:48:07.261: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:46:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:46:53 +0000 UTC }] Jan 23 12:48:07.261: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC }] Jan 23 12:48:07.261: 
INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC }] Jan 23 12:48:07.261: INFO: Jan 23 12:48:07.261: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 23 12:48:08.282: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 12:48:08.283: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:46:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:46:53 +0000 UTC }] Jan 23 12:48:08.283: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC }] Jan 23 12:48:08.283: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 
+0000 UTC 2020-01-23 12:47:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC }] Jan 23 12:48:08.283: INFO: Jan 23 12:48:08.283: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 23 12:48:09.295: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 12:48:09.295: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:46:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:46:53 +0000 UTC }] Jan 23 12:48:09.295: INFO: ss-2 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:47:24 +0000 UTC }] Jan 23 12:48:09.295: INFO: Jan 23 12:48:09.295: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-j58x4 Jan 23 12:48:10.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j58x4 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 23 12:48:10.613: INFO: rc: 1 Jan 23 12:48:10.613: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec
--namespace=e2e-tests-statefulset-j58x4 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc000c41ce0 exit status 1 true [0xc000b4a400 0xc000b4a458 0xc000b4a498] [0xc000b4a400 0xc000b4a458 0xc000b4a498] [0xc000b4a430 0xc000b4a488] [0x935700 0x935700] 0xc0015898c0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jan 23 12:48:20.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j58x4 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 23 12:48:20.718: INFO: rc: 1 Jan 23 12:48:20.718: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j58x4 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000c41f80 exit status 1 true [0xc000b4a4d0 0xc000b4a530 0xc000b4a580] [0xc000b4a4d0 0xc000b4a530 0xc000b4a580] [0xc000b4a510 0xc000b4a570] [0x935700 0x935700] 0xc001589da0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 [... identical RunHostCmd retry repeated every 10s, failing each time with the same NotFound error, from Jan 23 12:48:30 through Jan 23 12:53:05 ...] Jan 23 12:53:15.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j58x4 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 23 12:53:15.618: INFO: rc: 1 Jan 23 12:53:15.618: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Jan 23 12:53:15.618: INFO: Scaling statefulset ss to 0 Jan 23 12:53:15.646: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jan 23 12:53:15.650: INFO: Deleting all statefulset in ns e2e-tests-statefulset-j58x4 Jan 23 12:53:15.654: INFO: Scaling statefulset ss to 0 Jan 23 12:53:15.666: INFO: Waiting for statefulset status.replicas updated to 0 Jan 23 12:53:15.669: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:53:15.693: INFO: Waiting up to 3m0s for
all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-j58x4" for this suite. Jan 23 12:53:23.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 12:53:23.977: INFO: namespace: e2e-tests-statefulset-j58x4, resource: bindings, ignored listing per whitelist Jan 23 12:53:23.977: INFO: namespace e2e-tests-statefulset-j58x4 deletion completed in 8.276904142s • [SLOW TEST:391.253 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 23 12:53:23.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 23 12:53:24.360: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-5775ddd4-3ddf-11ea-bb65-0242ac110005" in namespace "e2e-tests-downward-api-b97qd" to be "success or failure" Jan 23 12:53:24.377: INFO: Pod "downwardapi-volume-5775ddd4-3ddf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.722041ms Jan 23 12:53:26.432: INFO: Pod "downwardapi-volume-5775ddd4-3ddf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071875419s Jan 23 12:53:28.459: INFO: Pod "downwardapi-volume-5775ddd4-3ddf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099173045s Jan 23 12:53:30.601: INFO: Pod "downwardapi-volume-5775ddd4-3ddf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.240306143s Jan 23 12:53:32.646: INFO: Pod "downwardapi-volume-5775ddd4-3ddf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.286227867s Jan 23 12:53:34.674: INFO: Pod "downwardapi-volume-5775ddd4-3ddf-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.313337896s STEP: Saw pod success Jan 23 12:53:34.674: INFO: Pod "downwardapi-volume-5775ddd4-3ddf-11ea-bb65-0242ac110005" satisfied condition "success or failure" Jan 23 12:53:34.767: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5775ddd4-3ddf-11ea-bb65-0242ac110005 container client-container: STEP: delete the pod Jan 23 12:53:34.909: INFO: Waiting for pod downwardapi-volume-5775ddd4-3ddf-11ea-bb65-0242ac110005 to disappear Jan 23 12:53:34.918: INFO: Pod downwardapi-volume-5775ddd4-3ddf-11ea-bb65-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 23 12:53:34.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-b97qd" for this suite. 
Jan 23 12:53:43.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:53:43.391: INFO: namespace: e2e-tests-downward-api-b97qd, resource: bindings, ignored listing per whitelist
Jan 23 12:53:43.395: INFO: namespace e2e-tests-downward-api-b97qd deletion completed in 8.459696384s

• [SLOW TEST:19.418 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 12:53:43.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-63089593-3ddf-11ea-bb65-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-63089673-3ddf-11ea-bb65-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-63089593-3ddf-11ea-bb65-0242ac110005
STEP: Updating configmap cm-test-opt-upd-63089673-3ddf-11ea-bb65-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-630896cb-3ddf-11ea-bb65-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 12:55:29.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-87zhf" for this suite.
Jan 23 12:56:09.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:56:09.557: INFO: namespace: e2e-tests-projected-87zhf, resource: bindings, ignored listing per whitelist
Jan 23 12:56:09.565: INFO: namespace e2e-tests-projected-87zhf deletion completed in 40.211997599s

• [SLOW TEST:146.169 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 12:56:09.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-25l98
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-25l98
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-25l98
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-25l98
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-25l98
Jan 23 12:56:24.757: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-25l98, name: ss-0, uid: c262e664-3ddf-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Jan 23 12:56:32.494: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-25l98, name: ss-0, uid: c262e664-3ddf-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan 23 12:56:32.659: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-25l98, name: ss-0, uid: c262e664-3ddf-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan 23 12:56:32.713: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-25l98
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-25l98
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-25l98 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 23 12:56:46.660: INFO: Deleting all statefulset in ns e2e-tests-statefulset-25l98
Jan 23 12:56:46.673: INFO: Scaling statefulset ss to 0
Jan 23 12:57:06.713: INFO: Waiting for statefulset status.replicas updated to 0
Jan 23 12:57:06.719: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 12:57:06.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-25l98" for this suite.
Jan 23 12:57:14.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:57:15.127: INFO: namespace: e2e-tests-statefulset-25l98, resource: bindings, ignored listing per whitelist
Jan 23 12:57:15.180: INFO: namespace e2e-tests-statefulset-25l98 deletion completed in 8.287504057s

• [SLOW TEST:65.614 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 12:57:15.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-e14d0a7d-3ddf-11ea-bb65-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 23 12:57:15.699: INFO: Waiting up to 5m0s for pod "pod-secrets-e14fdfbe-3ddf-11ea-bb65-0242ac110005" in namespace "e2e-tests-secrets-mdndh" to be "success or failure"
Jan 23 12:57:15.709: INFO: Pod "pod-secrets-e14fdfbe-3ddf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.251407ms
Jan 23 12:57:18.024: INFO: Pod "pod-secrets-e14fdfbe-3ddf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324839918s
Jan 23 12:57:20.039: INFO: Pod "pod-secrets-e14fdfbe-3ddf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.338986685s
Jan 23 12:57:22.063: INFO: Pod "pod-secrets-e14fdfbe-3ddf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.36334743s
Jan 23 12:57:24.077: INFO: Pod "pod-secrets-e14fdfbe-3ddf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.377012217s
Jan 23 12:57:26.087: INFO: Pod "pod-secrets-e14fdfbe-3ddf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.386988095s
Jan 23 12:57:28.105: INFO: Pod "pod-secrets-e14fdfbe-3ddf-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.405241862s
STEP: Saw pod success
Jan 23 12:57:28.105: INFO: Pod "pod-secrets-e14fdfbe-3ddf-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 12:57:28.109: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-e14fdfbe-3ddf-11ea-bb65-0242ac110005 container secret-volume-test:
STEP: delete the pod
Jan 23 12:57:29.432: INFO: Waiting for pod pod-secrets-e14fdfbe-3ddf-11ea-bb65-0242ac110005 to disappear
Jan 23 12:57:29.448: INFO: Pod pod-secrets-e14fdfbe-3ddf-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 12:57:29.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-mdndh" for this suite.
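The secret-volume test above ("mappings and Item Mode set") mounts a secret through an `items` mapping that renames keys and sets a per-file mode. A manifest of that shape looks roughly like this (a sketch; the key names, paths, image, and mode value are illustrative assumptions, not recorded in the log):

```yaml
# Hypothetical pod similar to what the secrets mapping/mode test creates.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-example
      items:
      - key: data-1            # key in the Secret (assumed name)
        path: new-path-data-1  # file name inside the mount (assumed)
        mode: 0400             # per-item mode; overrides defaultMode for this file
```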
Jan 23 12:57:37.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:57:37.549: INFO: namespace: e2e-tests-secrets-mdndh, resource: bindings, ignored listing per whitelist
Jan 23 12:57:37.785: INFO: namespace e2e-tests-secrets-mdndh deletion completed in 8.326878728s

• [SLOW TEST:22.605 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 12:57:37.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 23 12:57:38.086: INFO: Waiting up to 5m0s for pod "downward-api-eebd6425-3ddf-11ea-bb65-0242ac110005" in namespace "e2e-tests-downward-api-mkp95" to be "success or failure"
Jan 23 12:57:38.113: INFO: Pod "downward-api-eebd6425-3ddf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.56125ms
Jan 23 12:57:40.151: INFO: Pod "downward-api-eebd6425-3ddf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065002886s
Jan 23 12:57:42.249: INFO: Pod "downward-api-eebd6425-3ddf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163026909s
Jan 23 12:57:45.127: INFO: Pod "downward-api-eebd6425-3ddf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.040252171s
Jan 23 12:57:48.293: INFO: Pod "downward-api-eebd6425-3ddf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.206943561s
Jan 23 12:57:50.308: INFO: Pod "downward-api-eebd6425-3ddf-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.221911017s
Jan 23 12:57:52.325: INFO: Pod "downward-api-eebd6425-3ddf-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.238918105s
STEP: Saw pod success
Jan 23 12:57:52.325: INFO: Pod "downward-api-eebd6425-3ddf-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 12:57:52.334: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-eebd6425-3ddf-11ea-bb65-0242ac110005 container dapi-container:
STEP: delete the pod
Jan 23 12:57:52.707: INFO: Waiting for pod downward-api-eebd6425-3ddf-11ea-bb65-0242ac110005 to disappear
Jan 23 12:57:52.742: INFO: Pod downward-api-eebd6425-3ddf-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 12:57:52.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-mkp95" for this suite.
Jan 23 12:57:58.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:57:59.057: INFO: namespace: e2e-tests-downward-api-mkp95, resource: bindings, ignored listing per whitelist
Jan 23 12:57:59.124: INFO: namespace e2e-tests-downward-api-mkp95 deletion completed in 6.368220224s

• [SLOW TEST:21.339 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 12:57:59.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-fb7fc9e0-3ddf-11ea-bb65-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-fb7fcaa8-3ddf-11ea-bb65-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-fb7fc9e0-3ddf-11ea-bb65-0242ac110005
STEP: Updating configmap cm-test-opt-upd-fb7fcaa8-3ddf-11ea-bb65-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-fb7fcb64-3ddf-11ea-bb65-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 12:59:21.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-bxql5" for this suite.
Jan 23 12:59:59.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:59:59.856: INFO: namespace: e2e-tests-configmap-bxql5, resource: bindings, ignored listing per whitelist
Jan 23 12:59:59.912: INFO: namespace e2e-tests-configmap-bxql5 deletion completed in 38.318636334s

• [SLOW TEST:120.787 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 12:59:59.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 23 13:00:00.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-6dm9q'
Jan 23 13:00:02.990: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 23 13:00:02.990: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Jan 23 13:00:07.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-6dm9q'
Jan 23 13:00:07.837: INFO: stderr: ""
Jan 23 13:00:07.837: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:00:07.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6dm9q" for this suite.
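The stderr line above warns that `kubectl run --generator=deployment/v1beta1` is deprecated. An explicit manifest equivalent to what that command creates would look roughly like this (a sketch; the `run:` selector label and single replica mirror the old generator's defaults, which is an assumption — only the name and image come from the log):

```yaml
# Hypothetical explicit replacement for the deprecated generator invocation.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment   # must match the template labels below
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```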
Jan 23 13:00:14.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:00:14.219: INFO: namespace: e2e-tests-kubectl-6dm9q, resource: bindings, ignored listing per whitelist
Jan 23 13:00:14.276: INFO: namespace e2e-tests-kubectl-6dm9q deletion completed in 6.420876888s

• [SLOW TEST:14.364 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:00:14.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-4bf9a19f-3de0-11ea-bb65-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 23 13:00:14.515: INFO: Waiting up to 5m0s for pod "pod-secrets-4bfb2094-3de0-11ea-bb65-0242ac110005" in namespace "e2e-tests-secrets-m8sw2" to be "success or failure"
Jan 23 13:00:14.659: INFO: Pod "pod-secrets-4bfb2094-3de0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 144.048885ms
Jan 23 13:00:16.685: INFO: Pod "pod-secrets-4bfb2094-3de0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169465025s
Jan 23 13:00:18.704: INFO: Pod "pod-secrets-4bfb2094-3de0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188275558s
Jan 23 13:00:20.990: INFO: Pod "pod-secrets-4bfb2094-3de0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.474949584s
Jan 23 13:00:23.072: INFO: Pod "pod-secrets-4bfb2094-3de0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.556889774s
Jan 23 13:00:25.164: INFO: Pod "pod-secrets-4bfb2094-3de0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.648666389s
Jan 23 13:00:27.173: INFO: Pod "pod-secrets-4bfb2094-3de0-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.657943594s
STEP: Saw pod success
Jan 23 13:00:27.173: INFO: Pod "pod-secrets-4bfb2094-3de0-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 13:00:27.177: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-4bfb2094-3de0-11ea-bb65-0242ac110005 container secret-volume-test:
STEP: delete the pod
Jan 23 13:00:27.402: INFO: Waiting for pod pod-secrets-4bfb2094-3de0-11ea-bb65-0242ac110005 to disappear
Jan 23 13:00:27.446: INFO: Pod pod-secrets-4bfb2094-3de0-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:00:27.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-m8sw2" for this suite.
Jan 23 13:00:34.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:00:34.866: INFO: namespace: e2e-tests-secrets-m8sw2, resource: bindings, ignored listing per whitelist
Jan 23 13:00:34.882: INFO: namespace e2e-tests-secrets-m8sw2 deletion completed in 7.426083177s

• [SLOW TEST:20.606 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:00:34.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 23 13:00:42.972: INFO: 10 pods remaining
Jan 23 13:00:42.973: INFO: 10 pods has nil DeletionTimestamp
Jan 23 13:00:42.973: INFO:
Jan 23 13:00:44.773: INFO: 0 pods remaining
Jan 23 13:00:44.773: INFO: 0 pods has nil DeletionTimestamp
Jan 23 13:00:44.773: INFO:
Jan 23 13:00:44.909: INFO: 0 pods remaining
Jan 23 13:00:44.909: INFO: 0 pods has nil DeletionTimestamp
Jan 23 13:00:44.909: INFO:
STEP: Gathering metrics
W0123 13:00:45.818354 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 23 13:00:45.818: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:00:45.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-pmptk" for this suite.
Jan 23 13:01:04.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:01:04.281: INFO: namespace: e2e-tests-gc-pmptk, resource: bindings, ignored listing per whitelist
Jan 23 13:01:04.377: INFO: namespace e2e-tests-gc-pmptk deletion completed in 18.553816147s

• [SLOW TEST:29.495 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:01:04.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-69ec91cf-3de0-11ea-bb65-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 23 13:01:04.759: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-69edf964-3de0-11ea-bb65-0242ac110005" in namespace "e2e-tests-projected-p4z5d" to be "success or failure"
Jan 23 13:01:04.817: INFO: Pod "pod-projected-configmaps-69edf964-3de0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 57.421457ms
Jan 23 13:01:06.847: INFO: Pod "pod-projected-configmaps-69edf964-3de0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087937226s
Jan 23 13:01:08.869: INFO: Pod "pod-projected-configmaps-69edf964-3de0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109359756s
Jan 23 13:01:12.221: INFO: Pod "pod-projected-configmaps-69edf964-3de0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.462012239s
Jan 23 13:01:14.234: INFO: Pod "pod-projected-configmaps-69edf964-3de0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.474275015s
Jan 23 13:01:16.253: INFO: Pod "pod-projected-configmaps-69edf964-3de0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.493366911s
Jan 23 13:01:18.262: INFO: Pod "pod-projected-configmaps-69edf964-3de0-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.502399179s
STEP: Saw pod success
Jan 23 13:01:18.262: INFO: Pod "pod-projected-configmaps-69edf964-3de0-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 13:01:18.265: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-69edf964-3de0-11ea-bb65-0242ac110005 container projected-configmap-volume-test:
STEP: delete the pod
Jan 23 13:01:18.941: INFO: Waiting for pod pod-projected-configmaps-69edf964-3de0-11ea-bb65-0242ac110005 to disappear
Jan 23 13:01:19.234: INFO: Pod pod-projected-configmaps-69edf964-3de0-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:01:19.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-p4z5d" for this suite.
Jan 23 13:01:27.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:01:27.494: INFO: namespace: e2e-tests-projected-p4z5d, resource: bindings, ignored listing per whitelist
Jan 23 13:01:27.794: INFO: namespace e2e-tests-projected-p4z5d deletion completed in 8.537219896s

• [SLOW TEST:23.417 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:01:27.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 23 13:01:28.238: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-b7zzr,SelfLink:/api/v1/namespaces/e2e-tests-watch-b7zzr/configmaps/e2e-watch-test-watch-closed,UID:77e236a2-3de0-11ea-a994-fa163e34d433,ResourceVersion:19192879,Generation:0,CreationTimestamp:2020-01-23 13:01:28 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 23 13:01:28.238: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-b7zzr,SelfLink:/api/v1/namespaces/e2e-tests-watch-b7zzr/configmaps/e2e-watch-test-watch-closed,UID:77e236a2-3de0-11ea-a994-fa163e34d433,ResourceVersion:19192880,Generation:0,CreationTimestamp:2020-01-23 13:01:28 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 23 13:01:28.467: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-b7zzr,SelfLink:/api/v1/namespaces/e2e-tests-watch-b7zzr/configmaps/e2e-watch-test-watch-closed,UID:77e236a2-3de0-11ea-a994-fa163e34d433,ResourceVersion:19192881,Generation:0,CreationTimestamp:2020-01-23 13:01:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 23 13:01:28.467: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-b7zzr,SelfLink:/api/v1/namespaces/e2e-tests-watch-b7zzr/configmaps/e2e-watch-test-watch-closed,UID:77e236a2-3de0-11ea-a994-fa163e34d433,ResourceVersion:19192882,Generation:0,CreationTimestamp:2020-01-23 13:01:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:01:28.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-b7zzr" for this suite.
Jan 23 13:01:34.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:01:34.919: INFO: namespace: e2e-tests-watch-b7zzr, resource: bindings, ignored listing per whitelist
Jan 23 13:01:34.939: INFO: namespace e2e-tests-watch-b7zzr deletion completed in 6.455643375s

• [SLOW TEST:7.144 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:01:34.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Jan 23 13:01:35.286: INFO: Waiting up to 5m0s for pod "client-containers-7c0ee047-3de0-11ea-bb65-0242ac110005" in namespace "e2e-tests-containers-d8v7t" to be "success or failure"
Jan 23 13:01:35.297: INFO: Pod "client-containers-7c0ee047-3de0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.826246ms
Jan 23 13:01:37.313: INFO: Pod "client-containers-7c0ee047-3de0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027257337s
Jan 23 13:01:39.328: INFO: Pod "client-containers-7c0ee047-3de0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042082578s
Jan 23 13:01:41.651: INFO: Pod "client-containers-7c0ee047-3de0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.365337939s
Jan 23 13:01:43.718: INFO: Pod "client-containers-7c0ee047-3de0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.432355767s
Jan 23 13:01:45.761: INFO: Pod "client-containers-7c0ee047-3de0-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.475232897s
STEP: Saw pod success
Jan 23 13:01:45.761: INFO: Pod "client-containers-7c0ee047-3de0-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 13:01:45.773: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-7c0ee047-3de0-11ea-bb65-0242ac110005 container test-container: 
STEP: delete the pod
Jan 23 13:01:46.327: INFO: Waiting for pod client-containers-7c0ee047-3de0-11ea-bb65-0242ac110005 to disappear
Jan 23 13:01:46.582: INFO: Pod client-containers-7c0ee047-3de0-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:01:46.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-d8v7t" for this suite.
Jan 23 13:01:54.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:01:54.617: INFO: namespace: e2e-tests-containers-d8v7t, resource: bindings, ignored listing per whitelist
Jan 23 13:01:54.670: INFO: namespace e2e-tests-containers-d8v7t deletion completed in 8.03262934s

• [SLOW TEST:19.730 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:01:54.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 23 13:01:54.885: INFO: Waiting up to 5m0s for pod "pod-87ce7c80-3de0-11ea-bb65-0242ac110005" in namespace "e2e-tests-emptydir-rbq6r" to be "success or failure"
Jan 23 13:01:54.901: INFO: Pod "pod-87ce7c80-3de0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.248086ms
Jan 23 13:01:56.924: INFO: Pod "pod-87ce7c80-3de0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038975676s
Jan 23 13:01:58.941: INFO: Pod "pod-87ce7c80-3de0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055218009s
Jan 23 13:02:00.999: INFO: Pod "pod-87ce7c80-3de0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114136913s
Jan 23 13:02:03.937: INFO: Pod "pod-87ce7c80-3de0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.051226723s
Jan 23 13:02:05.959: INFO: Pod "pod-87ce7c80-3de0-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.073554154s
STEP: Saw pod success
Jan 23 13:02:05.959: INFO: Pod "pod-87ce7c80-3de0-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 13:02:05.965: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-87ce7c80-3de0-11ea-bb65-0242ac110005 container test-container: 
STEP: delete the pod
Jan 23 13:02:07.622: INFO: Waiting for pod pod-87ce7c80-3de0-11ea-bb65-0242ac110005 to disappear
Jan 23 13:02:07.639: INFO: Pod pod-87ce7c80-3de0-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:02:07.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-rbq6r" for this suite.
Jan 23 13:02:14.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:02:14.408: INFO: namespace: e2e-tests-emptydir-rbq6r, resource: bindings, ignored listing per whitelist
Jan 23 13:02:14.415: INFO: namespace e2e-tests-emptydir-rbq6r deletion completed in 6.586268474s

• [SLOW TEST:19.745 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:02:14.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Jan 23 13:02:14.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 23 13:02:15.004: INFO: stderr: ""
Jan 23 13:02:15.004: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:02:15.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-88wk4" for this suite.
Jan 23 13:02:21.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:02:21.200: INFO: namespace: e2e-tests-kubectl-88wk4, resource: bindings, ignored listing per whitelist
Jan 23 13:02:21.248: INFO: namespace e2e-tests-kubectl-88wk4 deletion completed in 6.234043444s

• [SLOW TEST:6.832 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:02:21.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 23 13:02:21.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-5zlz4'
Jan 23 13:02:21.577: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 23 13:02:21.577: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan 23 13:02:23.632: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-tkb68]
Jan 23 13:02:23.632: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-tkb68" in namespace "e2e-tests-kubectl-5zlz4" to be "running and ready"
Jan 23 13:02:23.640: INFO: Pod "e2e-test-nginx-rc-tkb68": Phase="Pending", Reason="", readiness=false. Elapsed: 8.606768ms
Jan 23 13:02:25.656: INFO: Pod "e2e-test-nginx-rc-tkb68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023756636s
Jan 23 13:02:27.678: INFO: Pod "e2e-test-nginx-rc-tkb68": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045862445s
Jan 23 13:02:29.689: INFO: Pod "e2e-test-nginx-rc-tkb68": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05727791s
Jan 23 13:02:31.709: INFO: Pod "e2e-test-nginx-rc-tkb68": Phase="Running", Reason="", readiness=true. Elapsed: 8.077488726s
Jan 23 13:02:31.709: INFO: Pod "e2e-test-nginx-rc-tkb68" satisfied condition "running and ready"
Jan 23 13:02:31.710: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-tkb68]
Jan 23 13:02:31.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-5zlz4'
Jan 23 13:02:32.001: INFO: stderr: ""
Jan 23 13:02:32.001: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jan 23 13:02:32.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-5zlz4'
Jan 23 13:02:32.432: INFO: stderr: ""
Jan 23 13:02:32.432: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:02:32.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5zlz4" for this suite.
Jan 23 13:02:56.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:02:56.711: INFO: namespace: e2e-tests-kubectl-5zlz4, resource: bindings, ignored listing per whitelist
Jan 23 13:02:56.742: INFO: namespace e2e-tests-kubectl-5zlz4 deletion completed in 24.235623926s

• [SLOW TEST:35.493 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:02:56.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:03:12.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-g9j6h" for this suite.
Jan 23 13:03:36.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:03:36.305: INFO: namespace: e2e-tests-replication-controller-g9j6h, resource: bindings, ignored listing per whitelist
Jan 23 13:03:36.425: INFO: namespace e2e-tests-replication-controller-g9j6h deletion completed in 24.363433935s

• [SLOW TEST:39.683 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:03:36.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Jan 23 13:03:36.875: INFO: Waiting up to 5m0s for pod "var-expansion-c4998790-3de0-11ea-bb65-0242ac110005" in namespace "e2e-tests-var-expansion-kw7hc" to be "success or failure"
Jan 23 13:03:37.020: INFO: Pod "var-expansion-c4998790-3de0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 144.538116ms
Jan 23 13:03:39.470: INFO: Pod "var-expansion-c4998790-3de0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.595025105s
Jan 23 13:03:41.489: INFO: Pod "var-expansion-c4998790-3de0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.613409918s
Jan 23 13:03:44.249: INFO: Pod "var-expansion-c4998790-3de0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.373459143s
Jan 23 13:03:46.299: INFO: Pod "var-expansion-c4998790-3de0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.42397869s
Jan 23 13:03:48.313: INFO: Pod "var-expansion-c4998790-3de0-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.437458602s
Jan 23 13:03:50.972: INFO: Pod "var-expansion-c4998790-3de0-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.09680418s
STEP: Saw pod success
Jan 23 13:03:50.972: INFO: Pod "var-expansion-c4998790-3de0-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 13:03:50.978: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-c4998790-3de0-11ea-bb65-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 23 13:03:51.242: INFO: Waiting for pod var-expansion-c4998790-3de0-11ea-bb65-0242ac110005 to disappear
Jan 23 13:03:51.255: INFO: Pod var-expansion-c4998790-3de0-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:03:51.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-kw7hc" for this suite.
Jan 23 13:03:57.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:03:57.585: INFO: namespace: e2e-tests-var-expansion-kw7hc, resource: bindings, ignored listing per whitelist
Jan 23 13:03:57.607: INFO: namespace e2e-tests-var-expansion-kw7hc deletion completed in 6.337072584s

• [SLOW TEST:21.182 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:03:57.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 23 13:03:57.814: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/:
alternatives.log
alternatives.l... (200; 16.597265ms)
Jan 23 13:03:57.824: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.64559ms)
Jan 23 13:03:57.832: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.988548ms)
Jan 23 13:03:57.838: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.435764ms)
Jan 23 13:03:57.844: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.7906ms)
Jan 23 13:03:57.852: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.655588ms)
Jan 23 13:03:57.891: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 39.473765ms)
Jan 23 13:03:57.898: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.65689ms)
Jan 23 13:03:57.904: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.38113ms)
Jan 23 13:03:57.911: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.569916ms)
Jan 23 13:03:57.916: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.592333ms)
Jan 23 13:03:57.922: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.484044ms)
Jan 23 13:03:57.927: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.794772ms)
Jan 23 13:03:57.933: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.947909ms)
Jan 23 13:03:57.938: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.467491ms)
Jan 23 13:03:57.945: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.630329ms)
Jan 23 13:03:57.952: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.062167ms)
Jan 23 13:03:57.959: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.024761ms)
Jan 23 13:03:57.971: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.535012ms)
Jan 23 13:03:57.983: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.805305ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:03:57.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-kgzn5" for this suite.
Jan 23 13:04:04.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:04:04.063: INFO: namespace: e2e-tests-proxy-kgzn5, resource: bindings, ignored listing per whitelist
Jan 23 13:04:04.124: INFO: namespace e2e-tests-proxy-kgzn5 deletion completed in 6.134654428s

• [SLOW TEST:6.516 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:04:04.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 23 13:04:24.509: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 23 13:04:24.610: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 23 13:04:26.612: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 23 13:04:26.630: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 23 13:04:28.614: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 23 13:04:28.644: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 23 13:04:30.612: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 23 13:04:30.647: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 23 13:04:32.611: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 23 13:04:32.656: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 23 13:04:34.611: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 23 13:04:34.628: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 23 13:04:36.611: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 23 13:04:36.634: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 23 13:04:38.611: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 23 13:04:38.652: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 23 13:04:40.612: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 23 13:04:40.645: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 23 13:04:42.611: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 23 13:04:43.031: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 23 13:04:44.611: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 23 13:04:44.633: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 23 13:04:46.611: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 23 13:04:46.629: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 23 13:04:48.611: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 23 13:04:48.648: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 23 13:04:50.611: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 23 13:04:50.657: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 23 13:04:52.612: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 23 13:04:52.650: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:04:52.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-7gg8j" for this suite.
Jan 23 13:05:16.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:05:16.834: INFO: namespace: e2e-tests-container-lifecycle-hook-7gg8j, resource: bindings, ignored listing per whitelist
Jan 23 13:05:16.927: INFO: namespace e2e-tests-container-lifecycle-hook-7gg8j deletion completed in 24.241110901s

• [SLOW TEST:72.803 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:05:16.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-dnm54
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 23 13:05:17.121: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 23 13:05:55.649: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-dnm54 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 13:05:55.649: INFO: >>> kubeConfig: /root/.kube/config
I0123 13:05:55.794772       8 log.go:172] (0xc001b4a420) (0xc0027563c0) Create stream
I0123 13:05:55.795031       8 log.go:172] (0xc001b4a420) (0xc0027563c0) Stream added, broadcasting: 1
I0123 13:05:55.817775       8 log.go:172] (0xc001b4a420) Reply frame received for 1
I0123 13:05:55.817857       8 log.go:172] (0xc001b4a420) (0xc0026083c0) Create stream
I0123 13:05:55.817873       8 log.go:172] (0xc001b4a420) (0xc0026083c0) Stream added, broadcasting: 3
I0123 13:05:55.824630       8 log.go:172] (0xc001b4a420) Reply frame received for 3
I0123 13:05:55.824695       8 log.go:172] (0xc001b4a420) (0xc002608460) Create stream
I0123 13:05:55.824725       8 log.go:172] (0xc001b4a420) (0xc002608460) Stream added, broadcasting: 5
I0123 13:05:55.829860       8 log.go:172] (0xc001b4a420) Reply frame received for 5
I0123 13:05:56.148048       8 log.go:172] (0xc001b4a420) Data frame received for 3
I0123 13:05:56.148182       8 log.go:172] (0xc0026083c0) (3) Data frame handling
I0123 13:05:56.148222       8 log.go:172] (0xc0026083c0) (3) Data frame sent
I0123 13:05:56.282483       8 log.go:172] (0xc001b4a420) Data frame received for 1
I0123 13:05:56.282623       8 log.go:172] (0xc0027563c0) (1) Data frame handling
I0123 13:05:56.282672       8 log.go:172] (0xc0027563c0) (1) Data frame sent
I0123 13:05:56.283393       8 log.go:172] (0xc001b4a420) (0xc002608460) Stream removed, broadcasting: 5
I0123 13:05:56.283983       8 log.go:172] (0xc001b4a420) (0xc0026083c0) Stream removed, broadcasting: 3
I0123 13:05:56.284082       8 log.go:172] (0xc001b4a420) (0xc0027563c0) Stream removed, broadcasting: 1
I0123 13:05:56.284126       8 log.go:172] (0xc001b4a420) Go away received
I0123 13:05:56.284732       8 log.go:172] (0xc001b4a420) (0xc0027563c0) Stream removed, broadcasting: 1
I0123 13:05:56.284792       8 log.go:172] (0xc001b4a420) (0xc0026083c0) Stream removed, broadcasting: 3
I0123 13:05:56.284816       8 log.go:172] (0xc001b4a420) (0xc002608460) Stream removed, broadcasting: 5
Jan 23 13:05:56.285: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:05:56.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-dnm54" for this suite.
Jan 23 13:06:20.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:06:20.616: INFO: namespace: e2e-tests-pod-network-test-dnm54, resource: bindings, ignored listing per whitelist
Jan 23 13:06:20.670: INFO: namespace e2e-tests-pod-network-test-dnm54 deletion completed in 24.364657317s

• [SLOW TEST:63.742 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:06:20.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 23 13:06:20.932: INFO: Creating deployment "test-recreate-deployment"
Jan 23 13:06:20.940: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan 23 13:06:20.998: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Jan 23 13:06:23.026: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan 23 13:06:23.031: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381581, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381581, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381581, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381581, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 13:06:25.046: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381581, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381581, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381581, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381581, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 13:06:27.814: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381581, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381581, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381581, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381581, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 13:06:29.045: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381581, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381581, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381581, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381581, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 13:06:31.096: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381581, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381581, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381581, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381581, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 13:06:33.050: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 23 13:06:33.071: INFO: Updating deployment test-recreate-deployment
Jan 23 13:06:33.071: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 23 13:06:33.971: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-d9spn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-d9spn/deployments/test-recreate-deployment,UID:26656d82-3de1-11ea-a994-fa163e34d433,ResourceVersion:19193526,Generation:2,CreationTimestamp:2020-01-23 13:06:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-23 13:06:33 +0000 UTC 2020-01-23 13:06:33 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-23 13:06:33 +0000 UTC 2020-01-23 13:06:21 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan 23 13:06:34.015: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-d9spn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-d9spn/replicasets/test-recreate-deployment-589c4bfd,UID:2dd5f5ff-3de1-11ea-a994-fa163e34d433,ResourceVersion:19193524,Generation:1,CreationTimestamp:2020-01-23 13:06:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 26656d82-3de1-11ea-a994-fa163e34d433 0xc0013834df 0xc0013834f0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 23 13:06:34.015: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 23 13:06:34.016: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-d9spn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-d9spn/replicasets/test-recreate-deployment-5bf7f65dc,UID:266f91ab-3de1-11ea-a994-fa163e34d433,ResourceVersion:19193515,Generation:2,CreationTimestamp:2020-01-23 13:06:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 26656d82-3de1-11ea-a994-fa163e34d433 0xc0013835b0 0xc0013835b1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 23 13:06:35.695: INFO: Pod "test-recreate-deployment-589c4bfd-z26jm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-z26jm,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-d9spn,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d9spn/pods/test-recreate-deployment-589c4bfd-z26jm,UID:2ddcd691-3de1-11ea-a994-fa163e34d433,ResourceVersion:19193527,Generation:0,CreationTimestamp:2020-01-23 13:06:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 2dd5f5ff-3de1-11ea-a994-fa163e34d433 0xc0016db11f 0xc0016db130}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k8bxm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k8bxm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-k8bxm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0016db190} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0016db1b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:06:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:06:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:06:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:06:33 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-23 13:06:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:06:35.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-d9spn" for this suite.
Jan 23 13:06:44.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:06:44.833: INFO: namespace: e2e-tests-deployment-d9spn, resource: bindings, ignored listing per whitelist
Jan 23 13:06:44.934: INFO: namespace e2e-tests-deployment-d9spn deletion completed in 9.21635291s

• [SLOW TEST:24.263 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
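The Deployment dump above shows `Strategy{Type:Recreate}`, which is why the status reports `UnavailableReplicas:1` mid-rollout: the old ReplicaSet (`redis`) is scaled to 0 before the new one (`nginx`) scales up, so old and new pods never overlap. Reconstructed as a manifest from fields visible in the object dump:

```yaml
# Reconstructed from the Deployment dump in the log; field values
# (labels, images, replica count) are taken from that dump.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod-3
  strategy:
    type: Recreate        # old ReplicaSet scales to 0 before the new one scales up
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine   # revision 2; revision 1 ran redis
```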
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:06:44.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 23 13:06:45.252: INFO: Waiting up to 5m0s for pod "pod-34e09468-3de1-11ea-bb65-0242ac110005" in namespace "e2e-tests-emptydir-g45zp" to be "success or failure"
Jan 23 13:06:45.267: INFO: Pod "pod-34e09468-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.971304ms
Jan 23 13:06:48.065: INFO: Pod "pod-34e09468-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.81248764s
Jan 23 13:06:50.113: INFO: Pod "pod-34e09468-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.860817341s
Jan 23 13:06:54.233: INFO: Pod "pod-34e09468-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.981081533s
Jan 23 13:06:56.250: INFO: Pod "pod-34e09468-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.997563541s
Jan 23 13:06:58.272: INFO: Pod "pod-34e09468-3de1-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.020073301s
STEP: Saw pod success
Jan 23 13:06:58.272: INFO: Pod "pod-34e09468-3de1-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 13:06:58.294: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-34e09468-3de1-11ea-bb65-0242ac110005 container test-container: 
STEP: delete the pod
Jan 23 13:06:58.428: INFO: Waiting for pod pod-34e09468-3de1-11ea-bb65-0242ac110005 to disappear
Jan 23 13:06:58.433: INFO: Pod pod-34e09468-3de1-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:06:58.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-g45zp" for this suite.
Jan 23 13:07:04.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:07:04.750: INFO: namespace: e2e-tests-emptydir-g45zp, resource: bindings, ignored listing per whitelist
Jan 23 13:07:04.887: INFO: namespace e2e-tests-emptydir-g45zp deletion completed in 6.445890452s

• [SLOW TEST:19.952 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
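The "(root,0777,tmpfs)" case above runs a one-shot pod that writes to a memory-backed emptyDir and exits, after which the framework checks for "success or failure". A sketch of an equivalent pod, with the image and command as assumptions (only the container name `test-container` appears in the log):

```yaml
# Illustrative pod for the (root,0777,tmpfs) emptyDir case.
# Image and command are assumptions; the test pod name is generated.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                      # assumed image
    # Print the mount's mode so the permissions can be verified from logs
    command: ["/bin/sh", "-c", "stat -c %a /mnt && touch /mnt/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                    # tmpfs-backed, per the "(root,0777,tmpfs)" variant
```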
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:07:04.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-jfkgk/secret-test-40b2b7d3-3de1-11ea-bb65-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 23 13:07:05.076: INFO: Waiting up to 5m0s for pod "pod-configmaps-40b37488-3de1-11ea-bb65-0242ac110005" in namespace "e2e-tests-secrets-jfkgk" to be "success or failure"
Jan 23 13:07:05.087: INFO: Pod "pod-configmaps-40b37488-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.725954ms
Jan 23 13:07:07.363: INFO: Pod "pod-configmaps-40b37488-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286819871s
Jan 23 13:07:09.400: INFO: Pod "pod-configmaps-40b37488-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323929068s
Jan 23 13:07:11.679: INFO: Pod "pod-configmaps-40b37488-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.602740572s
Jan 23 13:07:13.737: INFO: Pod "pod-configmaps-40b37488-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.660569419s
Jan 23 13:07:15.746: INFO: Pod "pod-configmaps-40b37488-3de1-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.669277986s
STEP: Saw pod success
Jan 23 13:07:15.746: INFO: Pod "pod-configmaps-40b37488-3de1-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 13:07:15.752: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-40b37488-3de1-11ea-bb65-0242ac110005 container env-test: 
STEP: delete the pod
Jan 23 13:07:17.212: INFO: Waiting for pod pod-configmaps-40b37488-3de1-11ea-bb65-0242ac110005 to disappear
Jan 23 13:07:17.262: INFO: Pod pod-configmaps-40b37488-3de1-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:07:17.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-jfkgk" for this suite.
Jan 23 13:07:23.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:07:23.433: INFO: namespace: e2e-tests-secrets-jfkgk, resource: bindings, ignored listing per whitelist
Jan 23 13:07:23.537: INFO: namespace e2e-tests-secrets-jfkgk deletion completed in 6.256910933s

• [SLOW TEST:18.648 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
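The Secrets spec above creates a secret and a pod whose `env-test` container consumes it through an environment variable, then inspects the container logs. A hedged sketch of that arrangement; the secret key, value, variable name, and image are all assumptions, since none appear in this log:

```yaml
# Hypothetical secret-to-env wiring; key names, values, and image are assumptions.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test                     # the test appends a generated UID suffix
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox                      # assumed image
    command: ["/bin/sh", "-c", "env"]   # emits SECRET_DATA for the log check
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
```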
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:07:23.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 23 13:07:24.088: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 23 13:07:30.001: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 23 13:07:36.028: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 23 13:07:38.055: INFO: Creating deployment "test-rollover-deployment"
Jan 23 13:07:38.080: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 23 13:07:40.935: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 23 13:07:40.982: INFO: Ensure that both replica sets have 1 created replica
Jan 23 13:07:41.000: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 23 13:07:41.101: INFO: Updating deployment test-rollover-deployment
Jan 23 13:07:41.101: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 23 13:07:43.131: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 23 13:07:43.159: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 23 13:07:43.181: INFO: all replica sets need to contain the pod-template-hash label
Jan 23 13:07:43.181: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381662, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 13:07:45.334: INFO: all replica sets need to contain the pod-template-hash label
Jan 23 13:07:45.334: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381662, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 13:07:47.207: INFO: all replica sets need to contain the pod-template-hash label
Jan 23 13:07:47.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381662, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 13:07:50.256: INFO: all replica sets need to contain the pod-template-hash label
Jan 23 13:07:50.256: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381662, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 13:07:51.201: INFO: all replica sets need to contain the pod-template-hash label
Jan 23 13:07:51.201: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381662, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 13:07:53.207: INFO: all replica sets need to contain the pod-template-hash label
Jan 23 13:07:53.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381672, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 13:07:55.200: INFO: all replica sets need to contain the pod-template-hash label
Jan 23 13:07:55.200: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381672, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 13:07:57.198: INFO: all replica sets need to contain the pod-template-hash label
Jan 23 13:07:57.198: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381672, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 13:07:59.199: INFO: all replica sets need to contain the pod-template-hash label
Jan 23 13:07:59.199: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381672, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 13:08:01.268: INFO: all replica sets need to contain the pod-template-hash label
Jan 23 13:08:01.268: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381672, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381658, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 13:08:03.343: INFO: 
Jan 23 13:08:03.343: INFO: Ensure that both old replica sets have no replicas
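The polling loop above repeats until the deployment's status settles: between 13:07:43 and 13:08:03 the dumps show `Replicas:2, UpdatedReplicas:1, UnavailableReplicas:1`, and the test only proceeds once the new ReplicaSet owns every replica. As a rough sketch (a hypothetical helper, not the e2e framework's actual code), the completion condition being waited on can be expressed over a `DeploymentStatus`-like dict with the fields shown in the dumps:

```python
# Hypothetical sketch of the rollover-completion check the log above
# polls for -- NOT the e2e framework's real implementation. A rollover
# is done when the updated ReplicaSet accounts for every desired
# replica and nothing remains unavailable.

def rollover_complete(status, spec_replicas):
    """Evaluate a v1.DeploymentStatus-like dict against the desired count."""
    return (
        status["updatedReplicas"] == spec_replicas
        and status["replicas"] == spec_replicas
        and status["availableReplicas"] == spec_replicas
        and status["unavailableReplicas"] == 0
    )

# Mid-rollover snapshot from the 13:07:43 dump: still incomplete.
mid = {"replicas": 2, "updatedReplicas": 1,
       "availableReplicas": 1, "unavailableReplicas": 1}
# Final state from the 13:08:03 dump: one updated, available replica.
done = {"replicas": 1, "updatedReplicas": 1,
        "availableReplicas": 1, "unavailableReplicas": 0}

assert not rollover_complete(mid, 1)
assert rollover_complete(done, 1)
```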
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 23 13:08:03.358: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-2f5fd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2f5fd/deployments/test-rollover-deployment,UID:545e1413-3de1-11ea-a994-fa163e34d433,ResourceVersion:19193773,Generation:2,CreationTimestamp:2020-01-23 13:07:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-23 13:07:38 +0000 UTC 2020-01-23 13:07:38 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-23 13:08:03 +0000 UTC 2020-01-23 13:07:38 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 23 13:08:03.366: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-2f5fd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2f5fd/replicasets/test-rollover-deployment-5b8479fdb6,UID:562fb251-3de1-11ea-a994-fa163e34d433,ResourceVersion:19193763,Generation:2,CreationTimestamp:2020-01-23 13:07:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 545e1413-3de1-11ea-a994-fa163e34d433 0xc00103b127 0xc00103b128}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 23 13:08:03.366: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 23 13:08:03.366: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-2f5fd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2f5fd/replicasets/test-rollover-controller,UID:4be6b601-3de1-11ea-a994-fa163e34d433,ResourceVersion:19193772,Generation:2,CreationTimestamp:2020-01-23 13:07:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 545e1413-3de1-11ea-a994-fa163e34d433 0xc00103ae97 0xc00103ae98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 23 13:08:03.366: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-2f5fd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2f5fd/replicasets/test-rollover-deployment-58494b7559,UID:5466daf0-3de1-11ea-a994-fa163e34d433,ResourceVersion:19193729,Generation:2,CreationTimestamp:2020-01-23 13:07:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 545e1413-3de1-11ea-a994-fa163e34d433 0xc00103b047 0xc00103b048}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 23 13:08:03.371: INFO: Pod "test-rollover-deployment-5b8479fdb6-bc7fn" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-bc7fn,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-2f5fd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2f5fd/pods/test-rollover-deployment-5b8479fdb6-bc7fn,UID:56d5cec8-3de1-11ea-a994-fa163e34d433,ResourceVersion:19193748,Generation:0,CreationTimestamp:2020-01-23 13:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 562fb251-3de1-11ea-a994-fa163e34d433 0xc0024c0977 0xc0024c0978}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-24pqv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-24pqv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-24pqv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0024c0ae0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0024c0b00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:07:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:07:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:07:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:07:42 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-23 13:07:42 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-23 13:07:52 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://29b9fb68d5a11278cc2e7d70d8cb9f9349991c4a4fa639a6feeabef5ffbc3a90}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:08:03.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-2f5fd" for this suite.
Jan 23 13:08:14.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:08:14.791: INFO: namespace: e2e-tests-deployment-2f5fd, resource: bindings, ignored listing per whitelist
Jan 23 13:08:15.100: INFO: namespace e2e-tests-deployment-2f5fd deletion completed in 11.721425698s

• [SLOW TEST:51.563 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
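The deployment dump above declares `RollingUpdate` with `MaxUnavailable:0, MaxSurge:1` for `Replicas:*1`, which is why the interim status readings show `Replicas:2` while `AvailableReplicas` never drops below 1. A small sketch of that arithmetic (the helper name is made up for illustration):

```python
# Bounds implied by the RollingUpdate strategy in the Deployment dump
# above (MaxUnavailable:0, MaxSurge:1, Replicas:*1). Hypothetical
# helper, shown only to explain the interim Replicas:2 readings.

def rolling_update_bounds(desired, max_surge, max_unavailable):
    """Return (min_available, max_total) pod counts during the update."""
    return desired - max_unavailable, desired + max_surge

lo, hi = rolling_update_bounds(1, 1, 0)
assert (lo, hi) == (1, 2)  # at least 1 available, at most 2 total pods
```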
------------------------------
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:08:15.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-6aa824c0-3de1-11ea-bb65-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 23 13:08:15.496: INFO: Waiting up to 5m0s for pod "pod-configmaps-6aa9f371-3de1-11ea-bb65-0242ac110005" in namespace "e2e-tests-configmap-r8xhd" to be "success or failure"
Jan 23 13:08:15.840: INFO: Pod "pod-configmaps-6aa9f371-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 344.21318ms
Jan 23 13:08:18.316: INFO: Pod "pod-configmaps-6aa9f371-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.819665869s
Jan 23 13:08:20.350: INFO: Pod "pod-configmaps-6aa9f371-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.853694718s
Jan 23 13:08:22.828: INFO: Pod "pod-configmaps-6aa9f371-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.331526889s
Jan 23 13:08:24.838: INFO: Pod "pod-configmaps-6aa9f371-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.341852973s
Jan 23 13:08:26.857: INFO: Pod "pod-configmaps-6aa9f371-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.361185564s
Jan 23 13:08:28.885: INFO: Pod "pod-configmaps-6aa9f371-3de1-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.389212692s
STEP: Saw pod success
Jan 23 13:08:28.885: INFO: Pod "pod-configmaps-6aa9f371-3de1-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 13:08:28.890: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-6aa9f371-3de1-11ea-bb65-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 23 13:08:28.959: INFO: Waiting for pod pod-configmaps-6aa9f371-3de1-11ea-bb65-0242ac110005 to disappear
Jan 23 13:08:29.671: INFO: Pod pod-configmaps-6aa9f371-3de1-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:08:29.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-r8xhd" for this suite.
Jan 23 13:08:35.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:08:35.779: INFO: namespace: e2e-tests-configmap-r8xhd, resource: bindings, ignored listing per whitelist
Jan 23 13:08:35.922: INFO: namespace e2e-tests-configmap-r8xhd deletion completed in 6.21289217s

• [SLOW TEST:20.820 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
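The ConfigMap test above waits up to 5m0s for the pod to reach a terminal phase, logging each poll (six `Pending` readings, then `Succeeded`). The wait pattern can be sketched as follows; `get_phase` is a stand-in for the API call, not a real framework function:

```python
# Hypothetical sketch of the "success or failure" wait seen in the log:
# poll a pod's phase until it reaches a terminal state or the poll
# budget runs out. `get_phase` stands in for an API lookup.

def wait_success_or_failure(get_phase, polls=150):
    for _ in range(polls):  # the real test waits for up to 5m0s
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
    raise TimeoutError("pod never reached a terminal phase")

# Phase sequence recorded above: six Pending polls, then Succeeded.
seq = iter(["Pending"] * 6 + ["Succeeded"])
assert wait_success_or_failure(lambda: next(seq)) == "Succeeded"
```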
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:08:35.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-n6rg
STEP: Creating a pod to test atomic-volume-subpath
Jan 23 13:08:36.561: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-n6rg" in namespace "e2e-tests-subpath-5wq5r" to be "success or failure"
Jan 23 13:08:36.601: INFO: Pod "pod-subpath-test-secret-n6rg": Phase="Pending", Reason="", readiness=false. Elapsed: 39.527902ms
Jan 23 13:08:38.932: INFO: Pod "pod-subpath-test-secret-n6rg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.370997699s
Jan 23 13:08:40.944: INFO: Pod "pod-subpath-test-secret-n6rg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.38301294s
Jan 23 13:08:43.359: INFO: Pod "pod-subpath-test-secret-n6rg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.797985523s
Jan 23 13:08:45.417: INFO: Pod "pod-subpath-test-secret-n6rg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.856017334s
Jan 23 13:08:47.442: INFO: Pod "pod-subpath-test-secret-n6rg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.880669527s
Jan 23 13:08:49.453: INFO: Pod "pod-subpath-test-secret-n6rg": Phase="Pending", Reason="", readiness=false. Elapsed: 12.892186008s
Jan 23 13:08:51.606: INFO: Pod "pod-subpath-test-secret-n6rg": Phase="Pending", Reason="", readiness=false. Elapsed: 15.045356629s
Jan 23 13:08:53.632: INFO: Pod "pod-subpath-test-secret-n6rg": Phase="Pending", Reason="", readiness=false. Elapsed: 17.071010857s
Jan 23 13:08:55.644: INFO: Pod "pod-subpath-test-secret-n6rg": Phase="Running", Reason="", readiness=false. Elapsed: 19.083288451s
Jan 23 13:08:57.667: INFO: Pod "pod-subpath-test-secret-n6rg": Phase="Running", Reason="", readiness=false. Elapsed: 21.105965496s
Jan 23 13:08:59.687: INFO: Pod "pod-subpath-test-secret-n6rg": Phase="Running", Reason="", readiness=false. Elapsed: 23.126394677s
Jan 23 13:09:01.706: INFO: Pod "pod-subpath-test-secret-n6rg": Phase="Running", Reason="", readiness=false. Elapsed: 25.145032945s
Jan 23 13:09:03.721: INFO: Pod "pod-subpath-test-secret-n6rg": Phase="Running", Reason="", readiness=false. Elapsed: 27.159717373s
Jan 23 13:09:05.733: INFO: Pod "pod-subpath-test-secret-n6rg": Phase="Running", Reason="", readiness=false. Elapsed: 29.171752373s
Jan 23 13:09:07.751: INFO: Pod "pod-subpath-test-secret-n6rg": Phase="Running", Reason="", readiness=false. Elapsed: 31.190308605s
Jan 23 13:09:09.778: INFO: Pod "pod-subpath-test-secret-n6rg": Phase="Running", Reason="", readiness=false. Elapsed: 33.216932141s
Jan 23 13:09:11.799: INFO: Pod "pod-subpath-test-secret-n6rg": Phase="Running", Reason="", readiness=false. Elapsed: 35.237754832s
Jan 23 13:09:13.811: INFO: Pod "pod-subpath-test-secret-n6rg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.250146564s
STEP: Saw pod success
Jan 23 13:09:13.811: INFO: Pod "pod-subpath-test-secret-n6rg" satisfied condition "success or failure"
Jan 23 13:09:13.815: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-n6rg container test-container-subpath-secret-n6rg: 
STEP: delete the pod
Jan 23 13:09:14.628: INFO: Waiting for pod pod-subpath-test-secret-n6rg to disappear
Jan 23 13:09:14.938: INFO: Pod pod-subpath-test-secret-n6rg no longer exists
STEP: Deleting pod pod-subpath-test-secret-n6rg
Jan 23 13:09:14.938: INFO: Deleting pod "pod-subpath-test-secret-n6rg" in namespace "e2e-tests-subpath-5wq5r"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:09:14.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-5wq5r" for this suite.
Jan 23 13:09:21.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:09:21.302: INFO: namespace: e2e-tests-subpath-5wq5r, resource: bindings, ignored listing per whitelist
Jan 23 13:09:21.324: INFO: namespace e2e-tests-subpath-5wq5r deletion completed in 6.181474324s

• [SLOW TEST:45.402 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:09:21.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jan 23 13:09:31.528: INFO: Pod pod-hostip-9201a1b0-3de1-11ea-bb65-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:09:31.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-7d4vl" for this suite.
Jan 23 13:09:57.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:09:57.727: INFO: namespace: e2e-tests-pods-7d4vl, resource: bindings, ignored listing per whitelist
Jan 23 13:09:57.813: INFO: namespace e2e-tests-pods-7d4vl deletion completed in 26.27376981s

• [SLOW TEST:36.488 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:09:57.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-a7d80370-3de1-11ea-bb65-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 23 13:09:58.131: INFO: Waiting up to 5m0s for pod "pod-configmaps-a7d91b6f-3de1-11ea-bb65-0242ac110005" in namespace "e2e-tests-configmap-m96cc" to be "success or failure"
Jan 23 13:09:58.155: INFO: Pod "pod-configmaps-a7d91b6f-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.207345ms
Jan 23 13:10:00.328: INFO: Pod "pod-configmaps-a7d91b6f-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196910282s
Jan 23 13:10:02.342: INFO: Pod "pod-configmaps-a7d91b6f-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.210620864s
Jan 23 13:10:05.442: INFO: Pod "pod-configmaps-a7d91b6f-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.310723074s
Jan 23 13:10:07.455: INFO: Pod "pod-configmaps-a7d91b6f-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.324463245s
Jan 23 13:10:09.486: INFO: Pod "pod-configmaps-a7d91b6f-3de1-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.354905192s
STEP: Saw pod success
Jan 23 13:10:09.486: INFO: Pod "pod-configmaps-a7d91b6f-3de1-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 13:10:09.501: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-a7d91b6f-3de1-11ea-bb65-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 23 13:10:09.877: INFO: Waiting for pod pod-configmaps-a7d91b6f-3de1-11ea-bb65-0242ac110005 to disappear
Jan 23 13:10:09.888: INFO: Pod pod-configmaps-a7d91b6f-3de1-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:10:09.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-m96cc" for this suite.
Jan 23 13:10:16.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:10:16.272: INFO: namespace: e2e-tests-configmap-m96cc, resource: bindings, ignored listing per whitelist
Jan 23 13:10:16.351: INFO: namespace e2e-tests-configmap-m96cc deletion completed in 6.442431064s

• [SLOW TEST:18.538 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:10:16.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 23 13:10:16.783: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 17.730573ms)
Jan 23 13:10:16.868: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 84.468416ms)
Jan 23 13:10:16.878: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 10.220573ms)
Jan 23 13:10:16.900: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 21.947182ms)
Jan 23 13:10:16.909: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.834948ms)
Jan 23 13:10:16.919: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.261339ms)
Jan 23 13:10:16.925: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.432409ms)
Jan 23 13:10:16.935: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.472151ms)
Jan 23 13:10:16.943: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.352758ms)
Jan 23 13:10:16.949: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.352765ms)
Jan 23 13:10:17.004: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 54.845501ms)
Jan 23 13:10:17.013: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.405308ms)
Jan 23 13:10:17.019: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.787824ms)
Jan 23 13:10:17.024: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.967337ms)
Jan 23 13:10:17.029: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.963475ms)
Jan 23 13:10:17.034: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.617015ms)
Jan 23 13:10:17.046: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 11.654938ms)
Jan 23 13:10:17.057: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 11.097478ms)
Jan 23 13:10:17.065: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.679975ms)
Jan 23 13:10:17.071: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.262691ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:10:17.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-pmjlf" for this suite.
Jan 23 13:10:25.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:10:25.186: INFO: namespace: e2e-tests-proxy-pmjlf, resource: bindings, ignored listing per whitelist
Jan 23 13:10:25.307: INFO: namespace e2e-tests-proxy-pmjlf deletion completed in 8.229111221s

• [SLOW TEST:8.955 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:10:25.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 23 13:10:25.698: INFO: Number of nodes with available pods: 0
Jan 23 13:10:25.698: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:27.908: INFO: Number of nodes with available pods: 0
Jan 23 13:10:27.908: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:28.764: INFO: Number of nodes with available pods: 0
Jan 23 13:10:28.764: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:29.783: INFO: Number of nodes with available pods: 0
Jan 23 13:10:29.784: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:30.725: INFO: Number of nodes with available pods: 0
Jan 23 13:10:30.725: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:31.732: INFO: Number of nodes with available pods: 0
Jan 23 13:10:31.733: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:32.769: INFO: Number of nodes with available pods: 0
Jan 23 13:10:32.769: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:33.921: INFO: Number of nodes with available pods: 0
Jan 23 13:10:33.921: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:35.285: INFO: Number of nodes with available pods: 0
Jan 23 13:10:35.285: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:35.733: INFO: Number of nodes with available pods: 0
Jan 23 13:10:35.733: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:36.807: INFO: Number of nodes with available pods: 1
Jan 23 13:10:36.807: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 23 13:10:36.877: INFO: Number of nodes with available pods: 0
Jan 23 13:10:36.877: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:37.921: INFO: Number of nodes with available pods: 0
Jan 23 13:10:37.921: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:38.906: INFO: Number of nodes with available pods: 0
Jan 23 13:10:38.906: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:39.921: INFO: Number of nodes with available pods: 0
Jan 23 13:10:39.921: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:40.949: INFO: Number of nodes with available pods: 0
Jan 23 13:10:40.949: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:41.903: INFO: Number of nodes with available pods: 0
Jan 23 13:10:41.903: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:44.669: INFO: Number of nodes with available pods: 0
Jan 23 13:10:44.669: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:45.048: INFO: Number of nodes with available pods: 0
Jan 23 13:10:45.048: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:45.917: INFO: Number of nodes with available pods: 0
Jan 23 13:10:45.917: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:47.104: INFO: Number of nodes with available pods: 0
Jan 23 13:10:47.104: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:47.954: INFO: Number of nodes with available pods: 0
Jan 23 13:10:47.954: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:48.951: INFO: Number of nodes with available pods: 0
Jan 23 13:10:48.951: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:49.947: INFO: Number of nodes with available pods: 0
Jan 23 13:10:49.947: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:50.947: INFO: Number of nodes with available pods: 0
Jan 23 13:10:50.948: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:51.948: INFO: Number of nodes with available pods: 0
Jan 23 13:10:51.948: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:52.891: INFO: Number of nodes with available pods: 0
Jan 23 13:10:52.891: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:54.136: INFO: Number of nodes with available pods: 0
Jan 23 13:10:54.136: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:55.034: INFO: Number of nodes with available pods: 0
Jan 23 13:10:55.034: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:56.029: INFO: Number of nodes with available pods: 0
Jan 23 13:10:56.029: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:58.648: INFO: Number of nodes with available pods: 0
Jan 23 13:10:58.648: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:10:59.287: INFO: Number of nodes with available pods: 0
Jan 23 13:10:59.287: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:11:00.405: INFO: Number of nodes with available pods: 0
Jan 23 13:11:00.405: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:11:01.062: INFO: Number of nodes with available pods: 0
Jan 23 13:11:01.062: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:11:01.914: INFO: Number of nodes with available pods: 0
Jan 23 13:11:01.914: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 23 13:11:02.938: INFO: Number of nodes with available pods: 1
Jan 23 13:11:02.938: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-b2v6l, will wait for the garbage collector to delete the pods
Jan 23 13:11:03.173: INFO: Deleting DaemonSet.extensions daemon-set took: 146.333612ms
Jan 23 13:11:03.273: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.347675ms
Jan 23 13:11:12.783: INFO: Number of nodes with available pods: 0
Jan 23 13:11:12.783: INFO: Number of running nodes: 0, number of available pods: 0
Jan 23 13:11:12.790: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-b2v6l/daemonsets","resourceVersion":"19194182"},"items":null}

Jan 23 13:11:12.795: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-b2v6l/pods","resourceVersion":"19194182"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:11:12.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-b2v6l" for this suite.
Jan 23 13:11:18.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:11:19.156: INFO: namespace: e2e-tests-daemonsets-b2v6l, resource: bindings, ignored listing per whitelist
Jan 23 13:11:19.207: INFO: namespace e2e-tests-daemonsets-b2v6l deletion completed in 6.391244347s

• [SLOW TEST:53.900 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:11:19.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 23 13:11:19.615: INFO: Waiting up to 5m0s for pod "pod-d8596c88-3de1-11ea-bb65-0242ac110005" in namespace "e2e-tests-emptydir-d6vdx" to be "success or failure"
Jan 23 13:11:19.643: INFO: Pod "pod-d8596c88-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.251244ms
Jan 23 13:11:21.694: INFO: Pod "pod-d8596c88-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078469185s
Jan 23 13:11:23.718: INFO: Pod "pod-d8596c88-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103232845s
Jan 23 13:11:25.736: INFO: Pod "pod-d8596c88-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120765027s
Jan 23 13:11:27.887: INFO: Pod "pod-d8596c88-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.271796541s
Jan 23 13:11:29.914: INFO: Pod "pod-d8596c88-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.298803223s
Jan 23 13:11:31.939: INFO: Pod "pod-d8596c88-3de1-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.32353426s
STEP: Saw pod success
Jan 23 13:11:31.939: INFO: Pod "pod-d8596c88-3de1-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 13:11:31.944: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d8596c88-3de1-11ea-bb65-0242ac110005 container test-container: 
STEP: delete the pod
Jan 23 13:11:32.520: INFO: Waiting for pod pod-d8596c88-3de1-11ea-bb65-0242ac110005 to disappear
Jan 23 13:11:32.694: INFO: Pod pod-d8596c88-3de1-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:11:32.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-d6vdx" for this suite.
Jan 23 13:11:38.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:11:38.951: INFO: namespace: e2e-tests-emptydir-d6vdx, resource: bindings, ignored listing per whitelist
Jan 23 13:11:38.951: INFO: namespace e2e-tests-emptydir-d6vdx deletion completed in 6.234879955s

• [SLOW TEST:19.744 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:11:38.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 23 13:11:39.192: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e414f1a1-3de1-11ea-bb65-0242ac110005" in namespace "e2e-tests-downward-api-ffqdf" to be "success or failure"
Jan 23 13:11:39.198: INFO: Pod "downwardapi-volume-e414f1a1-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.377293ms
Jan 23 13:11:41.213: INFO: Pod "downwardapi-volume-e414f1a1-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020902967s
Jan 23 13:11:43.241: INFO: Pod "downwardapi-volume-e414f1a1-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049520563s
Jan 23 13:11:46.516: INFO: Pod "downwardapi-volume-e414f1a1-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.324232729s
Jan 23 13:11:48.806: INFO: Pod "downwardapi-volume-e414f1a1-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.614698027s
Jan 23 13:11:50.827: INFO: Pod "downwardapi-volume-e414f1a1-3de1-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.63495242s
Jan 23 13:11:52.848: INFO: Pod "downwardapi-volume-e414f1a1-3de1-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.656386519s
STEP: Saw pod success
Jan 23 13:11:52.848: INFO: Pod "downwardapi-volume-e414f1a1-3de1-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 13:11:52.873: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e414f1a1-3de1-11ea-bb65-0242ac110005 container client-container: 
STEP: delete the pod
Jan 23 13:11:53.313: INFO: Waiting for pod downwardapi-volume-e414f1a1-3de1-11ea-bb65-0242ac110005 to disappear
Jan 23 13:11:53.372: INFO: Pod downwardapi-volume-e414f1a1-3de1-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:11:53.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ffqdf" for this suite.
Jan 23 13:11:59.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:11:59.997: INFO: namespace: e2e-tests-downward-api-ffqdf, resource: bindings, ignored listing per whitelist
Jan 23 13:12:00.068: INFO: namespace e2e-tests-downward-api-ffqdf deletion completed in 6.683244278s

• [SLOW TEST:21.117 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:12:00.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Jan 23 13:12:00.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lctpd'
Jan 23 13:12:02.387: INFO: stderr: ""
Jan 23 13:12:02.387: INFO: stdout: "pod/pause created\n"
Jan 23 13:12:02.387: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 23 13:12:02.387: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-lctpd" to be "running and ready"
Jan 23 13:12:02.512: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 124.617254ms
Jan 23 13:12:04.630: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.242857614s
Jan 23 13:12:06.672: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.284552852s
Jan 23 13:12:08.703: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.315878298s
Jan 23 13:12:10.875: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.487157277s
Jan 23 13:12:12.900: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.512751895s
Jan 23 13:12:12.900: INFO: Pod "pause" satisfied condition "running and ready"
Jan 23 13:12:12.900: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 23 13:12:12.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-lctpd'
Jan 23 13:12:13.097: INFO: stderr: ""
Jan 23 13:12:13.097: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 23 13:12:13.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-lctpd'
Jan 23 13:12:13.220: INFO: stderr: ""
Jan 23 13:12:13.220: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 23 13:12:13.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-lctpd'
Jan 23 13:12:13.373: INFO: stderr: ""
Jan 23 13:12:13.373: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 23 13:12:13.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-lctpd'
Jan 23 13:12:13.503: INFO: stderr: ""
Jan 23 13:12:13.503: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Jan 23 13:12:13.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-lctpd'
Jan 23 13:12:13.706: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 23 13:12:13.706: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 23 13:12:13.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-lctpd'
Jan 23 13:12:13.884: INFO: stderr: "No resources found.\n"
Jan 23 13:12:13.884: INFO: stdout: ""
Jan 23 13:12:13.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-lctpd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 23 13:12:14.093: INFO: stderr: ""
Jan 23 13:12:14.093: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:12:14.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lctpd" for this suite.
Jan 23 13:12:20.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:12:20.398: INFO: namespace: e2e-tests-kubectl-lctpd, resource: bindings, ignored listing per whitelist
Jan 23 13:12:20.510: INFO: namespace e2e-tests-kubectl-lctpd deletion completed in 6.351163355s

• [SLOW TEST:20.441 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
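The cleanup step above filters out terminating pods with `kubectl get pods -o go-template=...`; kubectl evaluates that template against the pods' JSON, so a pod whose `metadata` lacks a `deletionTimestamp` key passes the `{{ if not ... }}` guard. A minimal local sketch of the same filter, using Go's `text/template` over a hypothetical pod list (the pod names and timestamp here are made up):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderNonTerminating applies the exact template string the test passes to
// kubectl, printing the name of every item that has no deletionTimestamp.
func renderNonTerminating(pods map[string]interface{}) string {
	const tpl = `{{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}`
	t := template.Must(template.New("pods").Parse(tpl))
	var buf bytes.Buffer
	if err := t.Execute(&buf, pods); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	// Hypothetical pod list: one live pod, one already marked for deletion.
	pods := map[string]interface{}{
		"items": []interface{}{
			map[string]interface{}{"metadata": map[string]interface{}{"name": "pause"}},
			map[string]interface{}{"metadata": map[string]interface{}{
				"name":              "pause-old",
				"deletionTimestamp": "2020-01-23T13:12:13Z",
			}},
		},
	}
	fmt.Print(renderNonTerminating(pods)) // only "pause" survives the filter
}
```

Note that a missing map key evaluates to nil in `text/template`, which is why `not .metadata.deletionTimestamp` is true for live pods; this relies on the data being maps (as kubectl's decoded JSON is), not structs.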
SS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:12:20.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 23 13:12:31.693: INFO: Successfully updated pod "pod-update-activedeadlineseconds-fcf545f6-3de1-11ea-bb65-0242ac110005"
Jan 23 13:12:31.693: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-fcf545f6-3de1-11ea-bb65-0242ac110005" in namespace "e2e-tests-pods-5vfsd" to be "terminated due to deadline exceeded"
Jan 23 13:12:31.720: INFO: Pod "pod-update-activedeadlineseconds-fcf545f6-3de1-11ea-bb65-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 27.136326ms
Jan 23 13:12:33.729: INFO: Pod "pod-update-activedeadlineseconds-fcf545f6-3de1-11ea-bb65-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.036346188s
Jan 23 13:12:33.729: INFO: Pod "pod-update-activedeadlineseconds-fcf545f6-3de1-11ea-bb65-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:12:33.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-5vfsd" for this suite.
Jan 23 13:12:40.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:12:41.037: INFO: namespace: e2e-tests-pods-5vfsd, resource: bindings, ignored listing per whitelist
Jan 23 13:12:41.068: INFO: namespace e2e-tests-pods-5vfsd deletion completed in 7.332976277s

• [SLOW TEST:20.558 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:12:41.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 23 13:12:41.223: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:13:04.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-mswgd" for this suite.
Jan 23 13:13:12.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:13:12.984: INFO: namespace: e2e-tests-init-container-mswgd, resource: bindings, ignored listing per whitelist
Jan 23 13:13:13.009: INFO: namespace e2e-tests-init-container-mswgd deletion completed in 8.25413589s

• [SLOW TEST:31.940 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:13:13.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:13:13.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-6xn2c" for this suite.
Jan 23 13:13:21.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:13:21.619: INFO: namespace: e2e-tests-services-6xn2c, resource: bindings, ignored listing per whitelist
Jan 23 13:13:21.718: INFO: namespace e2e-tests-services-6xn2c deletion completed in 8.233341534s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:8.709 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:13:21.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 23 13:13:21.978: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2151a219-3de2-11ea-bb65-0242ac110005" in namespace "e2e-tests-downward-api-68snj" to be "success or failure"
Jan 23 13:13:22.064: INFO: Pod "downwardapi-volume-2151a219-3de2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 86.001153ms
Jan 23 13:13:24.369: INFO: Pod "downwardapi-volume-2151a219-3de2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.391067226s
Jan 23 13:13:26.404: INFO: Pod "downwardapi-volume-2151a219-3de2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.42592112s
Jan 23 13:13:28.414: INFO: Pod "downwardapi-volume-2151a219-3de2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.436343786s
Jan 23 13:13:30.779: INFO: Pod "downwardapi-volume-2151a219-3de2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.800929424s
Jan 23 13:13:32.806: INFO: Pod "downwardapi-volume-2151a219-3de2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.827994549s
Jan 23 13:13:34.837: INFO: Pod "downwardapi-volume-2151a219-3de2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.85914607s
Jan 23 13:13:36.860: INFO: Pod "downwardapi-volume-2151a219-3de2-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.882259158s
STEP: Saw pod success
Jan 23 13:13:36.860: INFO: Pod "downwardapi-volume-2151a219-3de2-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 13:13:36.871: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-2151a219-3de2-11ea-bb65-0242ac110005 container client-container: 
STEP: delete the pod
Jan 23 13:13:37.991: INFO: Waiting for pod downwardapi-volume-2151a219-3de2-11ea-bb65-0242ac110005 to disappear
Jan 23 13:13:38.004: INFO: Pod downwardapi-volume-2151a219-3de2-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:13:38.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-68snj" for this suite.
Jan 23 13:13:46.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:13:46.134: INFO: namespace: e2e-tests-downward-api-68snj, resource: bindings, ignored listing per whitelist
Jan 23 13:13:46.267: INFO: namespace e2e-tests-downward-api-68snj deletion completed in 8.253818516s

• [SLOW TEST:24.549 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:13:46.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-bb7qm
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 23 13:13:46.636: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 23 13:14:27.232: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-bb7qm PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 13:14:27.232: INFO: >>> kubeConfig: /root/.kube/config
I0123 13:14:27.381147       8 log.go:172] (0xc001b4a420) (0xc0018ba960) Create stream
I0123 13:14:27.381242       8 log.go:172] (0xc001b4a420) (0xc0018ba960) Stream added, broadcasting: 1
I0123 13:14:27.391748       8 log.go:172] (0xc001b4a420) Reply frame received for 1
I0123 13:14:27.391824       8 log.go:172] (0xc001b4a420) (0xc002242aa0) Create stream
I0123 13:14:27.391847       8 log.go:172] (0xc001b4a420) (0xc002242aa0) Stream added, broadcasting: 3
I0123 13:14:27.393970       8 log.go:172] (0xc001b4a420) Reply frame received for 3
I0123 13:14:27.394018       8 log.go:172] (0xc001b4a420) (0xc0018bab40) Create stream
I0123 13:14:27.394035       8 log.go:172] (0xc001b4a420) (0xc0018bab40) Stream added, broadcasting: 5
I0123 13:14:27.395920       8 log.go:172] (0xc001b4a420) Reply frame received for 5
I0123 13:14:27.611419       8 log.go:172] (0xc001b4a420) Data frame received for 3
I0123 13:14:27.611489       8 log.go:172] (0xc002242aa0) (3) Data frame handling
I0123 13:14:27.611525       8 log.go:172] (0xc002242aa0) (3) Data frame sent
I0123 13:14:27.785236       8 log.go:172] (0xc001b4a420) (0xc002242aa0) Stream removed, broadcasting: 3
I0123 13:14:27.785522       8 log.go:172] (0xc001b4a420) (0xc0018bab40) Stream removed, broadcasting: 5
I0123 13:14:27.785574       8 log.go:172] (0xc001b4a420) Data frame received for 1
I0123 13:14:27.785619       8 log.go:172] (0xc0018ba960) (1) Data frame handling
I0123 13:14:27.785666       8 log.go:172] (0xc0018ba960) (1) Data frame sent
I0123 13:14:27.785701       8 log.go:172] (0xc001b4a420) (0xc0018ba960) Stream removed, broadcasting: 1
I0123 13:14:27.785741       8 log.go:172] (0xc001b4a420) Go away received
I0123 13:14:27.786253       8 log.go:172] (0xc001b4a420) (0xc0018ba960) Stream removed, broadcasting: 1
I0123 13:14:27.786279       8 log.go:172] (0xc001b4a420) (0xc002242aa0) Stream removed, broadcasting: 3
I0123 13:14:27.786432       8 log.go:172] (0xc001b4a420) (0xc0018bab40) Stream removed, broadcasting: 5
Jan 23 13:14:27.786: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:14:27.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-bb7qm" for this suite.
Jan 23 13:14:43.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:14:44.019: INFO: namespace: e2e-tests-pod-network-test-bb7qm, resource: bindings, ignored listing per whitelist
Jan 23 13:14:44.126: INFO: namespace e2e-tests-pod-network-test-bb7qm deletion completed in 16.31550819s

• [SLOW TEST:57.858 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:14:44.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 23 13:14:44.405: INFO: Waiting up to 5m0s for pod "downwardapi-volume-527a84dd-3de2-11ea-bb65-0242ac110005" in namespace "e2e-tests-downward-api-5gknb" to be "success or failure"
Jan 23 13:14:44.426: INFO: Pod "downwardapi-volume-527a84dd-3de2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.847457ms
Jan 23 13:14:46.442: INFO: Pod "downwardapi-volume-527a84dd-3de2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037116218s
Jan 23 13:14:48.468: INFO: Pod "downwardapi-volume-527a84dd-3de2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063223238s
Jan 23 13:14:50.630: INFO: Pod "downwardapi-volume-527a84dd-3de2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.224918014s
Jan 23 13:14:52.695: INFO: Pod "downwardapi-volume-527a84dd-3de2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.290541632s
Jan 23 13:14:54.717: INFO: Pod "downwardapi-volume-527a84dd-3de2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.311938016s
Jan 23 13:14:56.733: INFO: Pod "downwardapi-volume-527a84dd-3de2-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.328325588s
STEP: Saw pod success
Jan 23 13:14:56.733: INFO: Pod "downwardapi-volume-527a84dd-3de2-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 13:14:56.746: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-527a84dd-3de2-11ea-bb65-0242ac110005 container client-container: 
STEP: delete the pod
Jan 23 13:14:56.929: INFO: Waiting for pod downwardapi-volume-527a84dd-3de2-11ea-bb65-0242ac110005 to disappear
Jan 23 13:14:56.943: INFO: Pod downwardapi-volume-527a84dd-3de2-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:14:56.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-5gknb" for this suite.
Jan 23 13:15:02.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:15:03.154: INFO: namespace: e2e-tests-downward-api-5gknb, resource: bindings, ignored listing per whitelist
Jan 23 13:15:03.160: INFO: namespace e2e-tests-downward-api-5gknb deletion completed in 6.207933295s

• [SLOW TEST:19.033 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:15:03.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-5dcb303a-3de2-11ea-bb65-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 23 13:15:03.512: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5dcbf3ed-3de2-11ea-bb65-0242ac110005" in namespace "e2e-tests-projected-9qz7g" to be "success or failure"
Jan 23 13:15:03.525: INFO: Pod "pod-projected-secrets-5dcbf3ed-3de2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.051266ms
Jan 23 13:15:05.550: INFO: Pod "pod-projected-secrets-5dcbf3ed-3de2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03830252s
Jan 23 13:15:07.627: INFO: Pod "pod-projected-secrets-5dcbf3ed-3de2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115158867s
Jan 23 13:15:09.835: INFO: Pod "pod-projected-secrets-5dcbf3ed-3de2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.323245918s
Jan 23 13:15:12.389: INFO: Pod "pod-projected-secrets-5dcbf3ed-3de2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.876637946s
Jan 23 13:15:14.409: INFO: Pod "pod-projected-secrets-5dcbf3ed-3de2-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.896893815s
STEP: Saw pod success
Jan 23 13:15:14.409: INFO: Pod "pod-projected-secrets-5dcbf3ed-3de2-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 13:15:14.414: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-5dcbf3ed-3de2-11ea-bb65-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 23 13:15:14.645: INFO: Waiting for pod pod-projected-secrets-5dcbf3ed-3de2-11ea-bb65-0242ac110005 to disappear
Jan 23 13:15:14.660: INFO: Pod pod-projected-secrets-5dcbf3ed-3de2-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:15:14.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9qz7g" for this suite.
Jan 23 13:15:21.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:15:22.230: INFO: namespace: e2e-tests-projected-9qz7g, resource: bindings, ignored listing per whitelist
Jan 23 13:15:22.262: INFO: namespace e2e-tests-projected-9qz7g deletion completed in 7.597086924s

• [SLOW TEST:19.100 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 23 13:15:22.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-69327fc3-3de2-11ea-bb65-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 23 13:15:22.572: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6936dd62-3de2-11ea-bb65-0242ac110005" in namespace "e2e-tests-projected-ttkh4" to be "success or failure"
Jan 23 13:15:22.644: INFO: Pod "pod-projected-configmaps-6936dd62-3de2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 71.441517ms
Jan 23 13:15:24.679: INFO: Pod "pod-projected-configmaps-6936dd62-3de2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107227608s
Jan 23 13:15:26.707: INFO: Pod "pod-projected-configmaps-6936dd62-3de2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135170594s
Jan 23 13:15:30.144: INFO: Pod "pod-projected-configmaps-6936dd62-3de2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.571530734s
Jan 23 13:15:32.176: INFO: Pod "pod-projected-configmaps-6936dd62-3de2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.603883747s
Jan 23 13:15:34.210: INFO: Pod "pod-projected-configmaps-6936dd62-3de2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.637892738s
Jan 23 13:15:37.051: INFO: Pod "pod-projected-configmaps-6936dd62-3de2-11ea-bb65-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.478600205s
Jan 23 13:15:39.067: INFO: Pod "pod-projected-configmaps-6936dd62-3de2-11ea-bb65-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.494682603s
STEP: Saw pod success
Jan 23 13:15:39.067: INFO: Pod "pod-projected-configmaps-6936dd62-3de2-11ea-bb65-0242ac110005" satisfied condition "success or failure"
Jan 23 13:15:39.070: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-6936dd62-3de2-11ea-bb65-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 23 13:15:41.176: INFO: Waiting for pod pod-projected-configmaps-6936dd62-3de2-11ea-bb65-0242ac110005 to disappear
Jan 23 13:15:41.244: INFO: Pod pod-projected-configmaps-6936dd62-3de2-11ea-bb65-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 23 13:15:41.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ttkh4" for this suite.
Jan 23 13:15:49.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:15:49.797: INFO: namespace: e2e-tests-projected-ttkh4, resource: bindings, ignored listing per whitelist
Jan 23 13:15:49.844: INFO: namespace e2e-tests-projected-ttkh4 deletion completed in 8.478260772s

• [SLOW TEST:27.582 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Jan 23 13:15:49.846: INFO: Running AfterSuite actions on all nodes
Jan 23 13:15:49.846: INFO: Running AfterSuite actions on node 1
Jan 23 13:15:49.846: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8902.647 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS