I0827 13:47:37.159186 7 e2e.go:224] Starting e2e run "db339393-e86b-11ea-894e-0242ac11000d" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1598536051 - Will randomize all specs
Will run 201 of 2164 specs
Aug 27 13:47:38.038: INFO: >>> kubeConfig: /root/.kube/config
Aug 27 13:47:38.111: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 27 13:47:38.267: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 27 13:47:38.426: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 27 13:47:38.426: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 27 13:47:38.426: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 27 13:47:38.493: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 27 13:47:38.493: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 27 13:47:38.493: INFO: e2e test version: v1.13.12
Aug 27 13:47:38.497: INFO: kube-apiserver version: v1.13.12
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 13:47:38.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Aug 27 13:47:38.728: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-deeb5684-e86b-11ea-894e-0242ac11000d
STEP: Creating a pod to test consume configMaps
Aug 27 13:47:38.822: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-deec5b56-e86b-11ea-894e-0242ac11000d" in namespace "e2e-tests-projected-h5g7t" to be "success or failure"
Aug 27 13:47:38.833: INFO: Pod "pod-projected-configmaps-deec5b56-e86b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.454711ms
Aug 27 13:47:40.961: INFO: Pod "pod-projected-configmaps-deec5b56-e86b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138911323s
Aug 27 13:47:42.968: INFO: Pod "pod-projected-configmaps-deec5b56-e86b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.14566917s
Aug 27 13:47:45.118: INFO: Pod "pod-projected-configmaps-deec5b56-e86b-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 6.295739903s
Aug 27 13:47:47.125: INFO: Pod "pod-projected-configmaps-deec5b56-e86b-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.302439048s
STEP: Saw pod success
Aug 27 13:47:47.125: INFO: Pod "pod-projected-configmaps-deec5b56-e86b-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 13:47:47.129: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-deec5b56-e86b-11ea-894e-0242ac11000d container projected-configmap-volume-test:
STEP: delete the pod
Aug 27 13:47:47.210: INFO: Waiting for pod pod-projected-configmaps-deec5b56-e86b-11ea-894e-0242ac11000d to disappear
Aug 27 13:47:47.808: INFO: Pod pod-projected-configmaps-deec5b56-e86b-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 13:47:47.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-h5g7t" for this suite.
Aug 27 13:47:58.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 13:47:58.737: INFO: namespace: e2e-tests-projected-h5g7t, resource: bindings, ignored listing per whitelist
Aug 27 13:47:58.819: INFO: namespace e2e-tests-projected-h5g7t deletion completed in 10.648310762s
• [SLOW TEST:20.318 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] Services should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 13:47:58.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 13:48:00.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-9dt9c" for this suite.
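The first spec in this run ([sig-storage] Projected configMap, defaultMode set) mounts a generated ConfigMap through the projected volume plugin and asserts on the file mode of the mounted keys. A minimal sketch of a pod with that shape; names, image, and the mode value are illustrative assumptions, not taken from the test:

apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-example   # hypothetical name; the test generates a UID-suffixed one
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                     # placeholder image
    command: ["cat", "/etc/projected/data-1"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-volume
    projected:
      defaultMode: 0400                # the "defaultMode set" part the spec verifies
      sources:
      - configMap:
          name: projected-configmap-test-volume   # hypothetical ConfigMap name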
Aug 27 13:48:06.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 13:48:07.026: INFO: namespace: e2e-tests-services-9dt9c, resource: bindings, ignored listing per whitelist
Aug 27 13:48:07.083: INFO: namespace e2e-tests-services-9dt9c deletion completed in 6.56699538s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:8.260 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 13:48:07.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 13:49:00.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-5s8wr" for this suite.
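The container-runtime spec above starts containers that exit and then checks RestartCount, Phase, Ready and State for each restart-policy variant ('rpa', 'rpof', 'rpn' suggest Always, OnFailure, Never). A sketch of the kind of pod involved; the image and command are placeholders, not the test's own:

apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-example        # hypothetical name
spec:
  restartPolicy: Never               # the spec also covers Always and OnFailure variants
  containers:
  - name: terminate-cmd
    image: busybox                   # placeholder image
    command: ["sh", "-c", "exit 0"]  # container exits immediately; with Never the pod should end Succeeded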
Aug 27 13:49:06.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 13:49:06.173: INFO: namespace: e2e-tests-container-runtime-5s8wr, resource: bindings, ignored listing per whitelist
Aug 27 13:49:06.250: INFO: namespace e2e-tests-container-runtime-5s8wr deletion completed in 6.168243342s
• [SLOW TEST:59.166 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 13:49:06.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-133b18cd-e86c-11ea-894e-0242ac11000d
STEP: Creating a pod to test consume configMaps
Aug 27 13:49:06.523: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-133bcdf3-e86c-11ea-894e-0242ac11000d" in namespace "e2e-tests-projected-d8t2g" to be "success or failure"
Aug 27 13:49:06.550: INFO: Pod "pod-projected-configmaps-133bcdf3-e86c-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.518123ms
Aug 27 13:49:08.590: INFO: Pod "pod-projected-configmaps-133bcdf3-e86c-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066805888s
Aug 27 13:49:10.658: INFO: Pod "pod-projected-configmaps-133bcdf3-e86c-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.134969615s
STEP: Saw pod success
Aug 27 13:49:10.659: INFO: Pod "pod-projected-configmaps-133bcdf3-e86c-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 13:49:10.664: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-133bcdf3-e86c-11ea-894e-0242ac11000d container projected-configmap-volume-test:
STEP: delete the pod
Aug 27 13:49:10.892: INFO: Waiting for pod pod-projected-configmaps-133bcdf3-e86c-11ea-894e-0242ac11000d to disappear
Aug 27 13:49:11.082: INFO: Pod pod-projected-configmaps-133bcdf3-e86c-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 13:49:11.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-d8t2g" for this suite.
Aug 27 13:49:17.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 13:49:17.406: INFO: namespace: e2e-tests-projected-d8t2g, resource: bindings, ignored listing per whitelist
Aug 27 13:49:17.443: INFO: namespace e2e-tests-projected-d8t2g deletion completed in 6.349821411s
• [SLOW TEST:11.191 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 13:49:17.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-whfzt
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-whfzt to expose endpoints map[]
Aug 27 13:49:18.031: INFO: Get endpoints failed (4.75693ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Aug 27 13:49:19.200: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-whfzt exposes endpoints map[] (1.173125357s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-whfzt
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-whfzt to expose endpoints map[pod1:[100]]
Aug 27 13:49:24.337: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (5.121675708s elapsed, will retry)
Aug 27 13:49:25.360: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-whfzt exposes endpoints map[pod1:[100]] (6.144027598s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-whfzt
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-whfzt to expose endpoints map[pod1:[100] pod2:[101]]
Aug 27 13:49:29.753: INFO: Unexpected endpoints: found map[1ace9520-e86c-11ea-a485-0242ac120004:[100]], expected map[pod1:[100] pod2:[101]] (4.387032275s elapsed, will retry)
Aug 27 13:49:31.937: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-whfzt exposes endpoints map[pod1:[100] pod2:[101]] (6.570781266s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-whfzt
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-whfzt to expose endpoints map[pod2:[101]]
Aug 27 13:49:31.956: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-whfzt exposes endpoints map[pod2:[101]] (12.609743ms elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-whfzt
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-whfzt to expose endpoints map[]
Aug 27 13:49:33.035: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-whfzt exposes endpoints map[] (1.074114085s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 13:49:33.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-whfzt" for this suite.
Aug 27 13:49:41.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 13:49:41.580: INFO: namespace: e2e-tests-services-whfzt, resource: bindings, ignored listing per whitelist
Aug 27 13:49:41.679: INFO: namespace e2e-tests-services-whfzt deletion completed in 8.331445566s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:24.235 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 13:49:41.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Aug 27 13:49:49.009: INFO: Successfully updated pod "annotationupdate289293b7-e86c-11ea-894e-0242ac11000d"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 13:49:52.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-wv4v8" for this suite.
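The Downward API spec above ("should update annotations on modification") mounts the pod's own annotations through a downwardAPI volume and then verifies the mounted file changes after the annotations are updated. A minimal sketch of that pod shape, with illustrative names and a placeholder image:

apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example     # hypothetical name; the test uses a UID-suffixed one
  annotations:
    builder: alice                   # an annotation value the test later modifies
spec:
  containers:
  - name: client-container
    image: busybox                   # placeholder image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations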
Aug 27 13:50:21.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 13:50:21.582: INFO: namespace: e2e-tests-downward-api-wv4v8, resource: bindings, ignored listing per whitelist
Aug 27 13:50:21.593: INFO: namespace e2e-tests-downward-api-wv4v8 deletion completed in 29.229313979s
• [SLOW TEST:39.913 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 13:50:21.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-411e06f2-e86c-11ea-894e-0242ac11000d
STEP: Creating secret with name secret-projected-all-test-volume-411e0541-e86c-11ea-894e-0242ac11000d
STEP: Creating a pod to test Check all projections for projected volume plugin
Aug 27 13:50:23.684: INFO: Waiting up to 5m0s for pod "projected-volume-411e017e-e86c-11ea-894e-0242ac11000d" in namespace "e2e-tests-projected-nzzw9" to be "success or failure"
Aug 27 13:50:23.704: INFO: Pod "projected-volume-411e017e-e86c-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 20.025621ms
Aug 27 13:50:25.710: INFO: Pod "projected-volume-411e017e-e86c-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026311782s
Aug 27 13:50:27.715: INFO: Pod "projected-volume-411e017e-e86c-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031309899s
Aug 27 13:50:29.753: INFO: Pod "projected-volume-411e017e-e86c-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069282972s
Aug 27 13:50:31.978: INFO: Pod "projected-volume-411e017e-e86c-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.293967967s
Aug 27 13:50:34.096: INFO: Pod "projected-volume-411e017e-e86c-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.41200131s
STEP: Saw pod success
Aug 27 13:50:34.096: INFO: Pod "projected-volume-411e017e-e86c-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 13:50:34.391: INFO: Trying to get logs from node hunter-worker pod projected-volume-411e017e-e86c-11ea-894e-0242ac11000d container projected-all-volume-test:
STEP: delete the pod
Aug 27 13:50:35.700: INFO: Waiting for pod projected-volume-411e017e-e86c-11ea-894e-0242ac11000d to disappear
Aug 27 13:50:36.071: INFO: Pod projected-volume-411e017e-e86c-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 13:50:36.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nzzw9" for this suite.
Aug 27 13:50:44.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 13:50:45.590: INFO: namespace: e2e-tests-projected-nzzw9, resource: bindings, ignored listing per whitelist
Aug 27 13:50:45.662: INFO: namespace e2e-tests-projected-nzzw9 deletion completed in 9.568494123s
• [SLOW TEST:24.068 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 13:50:45.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Aug 27 13:50:46.515: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix854397036/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 13:50:47.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hvhcp" for this suite.
Aug 27 13:50:53.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 13:50:53.870: INFO: namespace: e2e-tests-kubectl-hvhcp, resource: bindings, ignored listing per whitelist
Aug 27 13:50:54.609: INFO: namespace e2e-tests-kubectl-hvhcp deletion completed in 6.992953497s
• [SLOW TEST:8.946 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 13:50:54.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-tp8xt
Aug 27 13:51:04.708: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-tp8xt
STEP: checking the pod's current state and verifying that restartCount is present
Aug 27 13:51:04.714: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 13:55:06.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-tp8xt" for this suite.
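The probing spec above runs a pod named liveness-http with an HTTP liveness probe against /healthz and asserts that restartCount stays at 0 for the observation window (roughly four minutes in this run). A sketch of that probe shape; the image, port, and timing values are assumptions, not read from the test:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness       # assumed image serving a healthy /healthz endpoint
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15        # illustrative values; as long as /healthz keeps returning 200,
      failureThreshold: 1            # the kubelet never restarts the container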
Aug 27 13:55:15.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 13:55:15.966: INFO: namespace: e2e-tests-container-probe-tp8xt, resource: bindings, ignored listing per whitelist
Aug 27 13:55:16.211: INFO: namespace e2e-tests-container-probe-tp8xt deletion completed in 9.316947246s
• [SLOW TEST:261.601 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 13:55:16.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 27 13:55:17.886: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"f0601ee9-e86c-11ea-a485-0242ac120004", Controller:(*bool)(0x4001b4804a), BlockOwnerDeletion:(*bool)(0x4001b4804b)}}
Aug 27 13:55:18.198: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"f0440c5a-e86c-11ea-a485-0242ac120004", Controller:(*bool)(0x4001ec0662), BlockOwnerDeletion:(*bool)(0x4001ec0663)}}
Aug 27 13:55:18.208: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"f0453c36-e86c-11ea-a485-0242ac120004", Controller:(*bool)(0x4001742bba), BlockOwnerDeletion:(*bool)(0x4001742bbb)}}
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 13:55:28.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-mhbwp" for this suite.
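The garbage-collector spec above patches three pods into a cycle of ownerReferences (pod1 owned by pod3, pod2 by pod1, pod3 by pod2, per the logged UIDs) and checks that deletion is not blocked by the cycle. The shape of one such reference, shown here only as an illustrative fragment of pod1's metadata; the UID is the one from the log above:

apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: f0601ee9-e86c-11ea-a485-0242ac120004   # UID of the owning pod, taken from the log entry above
    controller: true
    blockOwnerDeletion: true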
Aug 27 13:55:37.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 13:55:37.717: INFO: namespace: e2e-tests-gc-mhbwp, resource: bindings, ignored listing per whitelist
Aug 27 13:55:37.814: INFO: namespace e2e-tests-gc-mhbwp deletion completed in 9.079916108s
• [SLOW TEST:21.602 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 13:55:37.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 27 13:55:37.972: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Aug 27 13:55:37.993: INFO: Pod name sample-pod: Found 0 pods out of 1
Aug 27 13:55:42.999: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 27 13:55:43.000: INFO: Creating deployment "test-rolling-update-deployment"
Aug 27 13:55:43.013: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Aug 27 13:55:43.029: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Aug 27 13:55:45.053: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Aug 27 13:55:45.058: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734133343, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734133343, loc:(*time.Location)(0x70e3ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734133343, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734133343, loc:(*time.Location)(0x70e3ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 13:55:47.064: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734133343, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734133343, loc:(*time.Location)(0x70e3ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734133343, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734133343, loc:(*time.Location)(0x70e3ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 13:55:49.095: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734133343, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734133343, loc:(*time.Location)(0x70e3ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734133343, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734133343, loc:(*time.Location)(0x70e3ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 13:55:51.066: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Aug 27 13:55:51.106: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-5v7ks,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-5v7ks/deployments/test-rolling-update-deployment,UID:ff916d44-e86c-11ea-a485-0242ac120004,ResourceVersion:2641025,Generation:1,CreationTimestamp:2020-08-27 13:55:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-27 13:55:43 +0000 UTC 2020-08-27 13:55:43 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-27 13:55:50 +0000 UTC 2020-08-27 13:55:43 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
Aug 27 13:55:51.118: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-5v7ks,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-5v7ks/replicasets/test-rolling-update-deployment-75db98fb4c,UID:ff951106-e86c-11ea-a485-0242ac120004,ResourceVersion:2641015,Generation:1,CreationTimestamp:2020-08-27 13:55:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment ff916d44-e86c-11ea-a485-0242ac120004 0x40014d1117 0x40014d1118}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Aug 27 13:55:51.119: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Aug 27 13:55:51.120: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-5v7ks,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-5v7ks/replicasets/test-rolling-update-controller,UID:fc924cc1-e86c-11ea-a485-0242ac120004,ResourceVersion:2641024,Generation:2,CreationTimestamp:2020-08-27 13:55:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment ff916d44-e86c-11ea-a485-0242ac120004 0x40014d102f 0x40014d1040}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 27 13:55:51.135: INFO: Pod "test-rolling-update-deployment-75db98fb4c-4ntjc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-4ntjc,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-5v7ks,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5v7ks/pods/test-rolling-update-deployment-75db98fb4c-4ntjc,UID:ff99571d-e86c-11ea-a485-0242ac120004,ResourceVersion:2641014,Generation:0,CreationTimestamp:2020-08-27 13:55:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c ff951106-e86c-11ea-a485-0242ac120004 0x40014d19e7 0x40014d19e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jxkbl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jxkbl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-jxkbl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x40014d1a60} {node.kubernetes.io/unreachable Exists NoExecute 0x40014d1a80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 13:55:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 13:55:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 13:55:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 13:55:43 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.133,StartTime:2020-08-27 13:55:43 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-27 13:55:48 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://8180941c5526bc0f189454111ee23cf0daeb64fd8ee3063605ce548a05a8a63c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 13:55:51.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-5v7ks" for this suite.
Aug 27 13:55:57.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 13:55:57.275: INFO: namespace: e2e-tests-deployment-5v7ks, resource: bindings, ignored listing per whitelist
Aug 27 13:55:57.381: INFO: namespace e2e-tests-deployment-5v7ks deletion completed in 6.238250036s
• [SLOW TEST:19.567 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 13:55:57.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 27 13:55:57.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-v799h'
Aug 27 13:56:01.463: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 27 13:56:01.463: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Aug 27 13:56:01.834: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-qp4vw]
Aug 27 13:56:01.835: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-qp4vw" in namespace "e2e-tests-kubectl-v799h" to be "running and ready"
Aug 27 13:56:01.873: INFO: Pod "e2e-test-nginx-rc-qp4vw": Phase="Pending", Reason="", readiness=false. Elapsed: 37.204127ms
Aug 27 13:56:03.881: INFO: Pod "e2e-test-nginx-rc-qp4vw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044952045s
Aug 27 13:56:06.175: INFO: Pod "e2e-test-nginx-rc-qp4vw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.339527265s
Aug 27 13:56:08.182: INFO: Pod "e2e-test-nginx-rc-qp4vw": Phase="Running", Reason="", readiness=true. Elapsed: 6.345953549s
Aug 27 13:56:08.182: INFO: Pod "e2e-test-nginx-rc-qp4vw" satisfied condition "running and ready"
Aug 27 13:56:08.182: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-qp4vw]
Aug 27 13:56:08.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-v799h'
Aug 27 13:56:09.557: INFO: stderr: ""
Aug 27 13:56:09.557: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Aug 27 13:56:09.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-v799h'
Aug 27 13:56:10.808: INFO: stderr: ""
Aug 27 13:56:10.808: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 13:56:10.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-v799h" for this suite.
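The kubectl spec above creates a ReplicationController through the deprecated run/v1 generator, and the logged warning points at kubectl create as the replacement. A rough manifest sketch of what that generator produces, so the same object could be created declaratively; the run-label convention is reproduced here as an assumption about the generator, not copied from the test output:

apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc           # assumed generator convention: selector/label keyed on "run"
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine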
Aug 27 13:56:34.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 13:56:35.130: INFO: namespace: e2e-tests-kubectl-v799h, resource: bindings, ignored listing per whitelist
Aug 27 13:56:35.147: INFO: namespace e2e-tests-kubectl-v799h deletion completed in 24.25197311s
• [SLOW TEST:37.760 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 13:56:35.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Aug 27 13:56:35.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-krgzs'
Aug 27 13:56:37.675: INFO: stderr: ""
Aug 27 13:56:37.675: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 27 13:56:38.868: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 13:56:38.869: INFO: Found 0 / 1
Aug 27 13:56:39.862: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 13:56:39.863: INFO: Found 0 / 1
Aug 27 13:56:40.704: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 13:56:40.704: INFO: Found 0 / 1
Aug 27 13:56:41.957: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 13:56:41.958: INFO: Found 0 / 1
Aug 27 13:56:42.711: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 13:56:42.712: INFO: Found 0 / 1
Aug 27 13:56:43.880: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 13:56:43.881: INFO: Found 0 / 1
Aug 27 13:56:44.682: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 13:56:44.683: INFO: Found 0 / 1
Aug 27 13:56:45.716: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 13:56:45.716: INFO: Found 0 / 1
Aug 27 13:56:46.683: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 13:56:46.684: INFO: Found 1 / 1
Aug 27 13:56:46.684: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Aug 27 13:56:46.689: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 13:56:46.690: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Aug 27 13:56:46.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-64rnj --namespace=e2e-tests-kubectl-krgzs -p {"metadata":{"annotations":{"x":"y"}}}'
Aug 27 13:56:48.574: INFO: stderr: ""
Aug 27 13:56:48.575: INFO: stdout: "pod/redis-master-64rnj patched\n"
STEP: checking annotations
Aug 27 13:56:48.627: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 13:56:48.627: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 13:56:48.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-krgzs" for this suite.
Aug 27 13:57:13.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 13:57:13.081: INFO: namespace: e2e-tests-kubectl-krgzs, resource: bindings, ignored listing per whitelist
Aug 27 13:57:13.180: INFO: namespace e2e-tests-kubectl-krgzs deletion completed in 24.232584932s
• [SLOW TEST:38.032 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 13:57:13.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 13:57:19.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-whggk" for this suite.
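The Kubelet spec above schedules a busybox pod with hostAliases and verifies that the corresponding entries show up in the container's /etc/hosts. A minimal sketch of such a pod; the name, IP, and hostnames are illustrative, not the test's own values:

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-example          # hypothetical name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: 127.0.0.1                    # illustrative alias; the kubelet appends these to /etc/hosts
    hostnames:
    - foo.local
    - bar.local
  containers:
  - name: busybox-host-aliases
    image: busybox                   # the spec says "busybox Pod"; the exact tag is an assumption
    command: ["cat", "/etc/hosts"]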
Aug 27 13:58:09.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 13:58:09.688: INFO: namespace: e2e-tests-kubelet-test-whggk, resource: bindings, ignored listing per whitelist Aug 27 13:58:09.757: INFO: namespace e2e-tests-kubelet-test-whggk deletion completed in 50.160033455s • [SLOW TEST:56.575 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 13:58:09.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-h59ld [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Aug 27 13:58:10.112: INFO: Found 0 stateful pods, waiting for 3 Aug 27 13:58:20.123: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 27 13:58:20.123: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 27 13:58:20.123: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Aug 27 13:58:30.161: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 27 13:58:30.161: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 27 13:58:30.162: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Aug 27 13:58:30.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-h59ld ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 27 13:58:32.086: INFO: stderr: "I0827 13:58:31.489491 189 log.go:172] (0x4000160840) (0x40007b2640) Create stream\nI0827 13:58:31.496961 189 log.go:172] (0x4000160840) (0x40007b2640) Stream added, broadcasting: 1\nI0827 13:58:31.514012 189 log.go:172] (0x4000160840) Reply frame received for 1\nI0827 13:58:31.514508 189 log.go:172] (0x4000160840) (0x40007b26e0) Create stream\nI0827 13:58:31.514571 189 log.go:172] (0x4000160840) (0x40007b26e0) Stream added, broadcasting: 3\nI0827 13:58:31.516286 189 log.go:172] (0x4000160840) 
Reply frame received for 3\nI0827 13:58:31.516619 189 log.go:172] (0x4000160840) (0x4000692c80) Create stream\nI0827 13:58:31.516720 189 log.go:172] (0x4000160840) (0x4000692c80) Stream added, broadcasting: 5\nI0827 13:58:31.518105 189 log.go:172] (0x4000160840) Reply frame received for 5\nI0827 13:58:32.063433 189 log.go:172] (0x4000160840) Data frame received for 5\nI0827 13:58:32.063682 189 log.go:172] (0x4000160840) Data frame received for 3\nI0827 13:58:32.063886 189 log.go:172] (0x40007b26e0) (3) Data frame handling\nI0827 13:58:32.064068 189 log.go:172] (0x4000692c80) (5) Data frame handling\nI0827 13:58:32.064409 189 log.go:172] (0x4000160840) Data frame received for 1\nI0827 13:58:32.064566 189 log.go:172] (0x40007b2640) (1) Data frame handling\nI0827 13:58:32.065735 189 log.go:172] (0x40007b2640) (1) Data frame sent\nI0827 13:58:32.065914 189 log.go:172] (0x40007b26e0) (3) Data frame sent\nI0827 13:58:32.065992 189 log.go:172] (0x4000160840) Data frame received for 3\nI0827 13:58:32.066045 189 log.go:172] (0x40007b26e0) (3) Data frame handling\nI0827 13:58:32.067394 189 log.go:172] (0x4000160840) (0x40007b2640) Stream removed, broadcasting: 1\nI0827 13:58:32.070130 189 log.go:172] (0x4000160840) Go away received\nI0827 13:58:32.071128 189 log.go:172] (0x4000160840) (0x40007b2640) Stream removed, broadcasting: 1\nI0827 13:58:32.073705 189 log.go:172] (0x4000160840) (0x40007b26e0) Stream removed, broadcasting: 3\nI0827 13:58:32.074076 189 log.go:172] (0x4000160840) (0x4000692c80) Stream removed, broadcasting: 5\n" Aug 27 13:58:32.087: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 27 13:58:32.087: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Aug 27 13:58:42.155: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Aug 27 13:58:54.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-h59ld ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 13:58:55.937: INFO: stderr: "I0827 13:58:55.836589 213 log.go:172] (0x4000728370) (0x40006714a0) Create stream\nI0827 13:58:55.839785 213 log.go:172] (0x4000728370) (0x40006714a0) Stream added, broadcasting: 1\nI0827 13:58:55.850526 213 log.go:172] (0x4000728370) Reply frame received for 1\nI0827 13:58:55.851249 213 log.go:172] (0x4000728370) (0x4000671540) Create stream\nI0827 13:58:55.851328 213 log.go:172] (0x4000728370) (0x4000671540) Stream added, broadcasting: 3\nI0827 13:58:55.852690 213 log.go:172] (0x4000728370) Reply frame received for 3\nI0827 13:58:55.852976 213 log.go:172] (0x4000728370) (0x40006bc000) Create stream\nI0827 13:58:55.853043 213 log.go:172] (0x4000728370) (0x40006bc000) Stream added, broadcasting: 5\nI0827 13:58:55.854396 213 log.go:172] (0x4000728370) Reply frame received for 5\nI0827 13:58:55.918340 213 log.go:172] (0x4000728370) Data frame received for 3\nI0827 13:58:55.918814 213 log.go:172] (0x4000671540) (3) Data frame handling\nI0827 13:58:55.919729 213 log.go:172] (0x4000728370) Data frame received for 5\nI0827 13:58:55.921092 213 log.go:172] (0x40006bc000) (5) Data frame handling\nI0827 13:58:55.921217 213 log.go:172] (0x4000728370) Data frame received for 1\nI0827 13:58:55.921357 213 log.go:172] (0x40006714a0) (1) Data 
frame handling\nI0827 13:58:55.921511 213 log.go:172] (0x40006714a0) (1) Data frame sent\nI0827 13:58:55.923229 213 log.go:172] (0x4000671540) (3) Data frame sent\nI0827 13:58:55.923299 213 log.go:172] (0x4000728370) Data frame received for 3\nI0827 13:58:55.923356 213 log.go:172] (0x4000671540) (3) Data frame handling\nI0827 13:58:55.925125 213 log.go:172] (0x4000728370) (0x40006714a0) Stream removed, broadcasting: 1\nI0827 13:58:55.925718 213 log.go:172] (0x4000728370) Go away received\nI0827 13:58:55.927918 213 log.go:172] (0x4000728370) (0x40006714a0) Stream removed, broadcasting: 1\nI0827 13:58:55.928128 213 log.go:172] (0x4000728370) (0x4000671540) Stream removed, broadcasting: 3\nI0827 13:58:55.928298 213 log.go:172] (0x4000728370) (0x40006bc000) Stream removed, broadcasting: 5\n" Aug 27 13:58:55.938: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 27 13:58:55.938: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 27 13:59:08.714: INFO: Waiting for StatefulSet e2e-tests-statefulset-h59ld/ss2 to complete update Aug 27 13:59:08.714: INFO: Waiting for Pod e2e-tests-statefulset-h59ld/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Aug 27 13:59:08.714: INFO: Waiting for Pod e2e-tests-statefulset-h59ld/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Aug 27 13:59:08.714: INFO: Waiting for Pod e2e-tests-statefulset-h59ld/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Aug 27 13:59:19.223: INFO: Waiting for StatefulSet e2e-tests-statefulset-h59ld/ss2 to complete update Aug 27 13:59:19.223: INFO: Waiting for Pod e2e-tests-statefulset-h59ld/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Aug 27 13:59:19.223: INFO: Waiting for Pod e2e-tests-statefulset-h59ld/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Aug 27 13:59:28.735: INFO: Waiting for StatefulSet e2e-tests-statefulset-h59ld/ss2 to complete update Aug 27 13:59:28.735: INFO: Waiting for Pod e2e-tests-statefulset-h59ld/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Aug 27 13:59:38.727: INFO: Waiting for StatefulSet e2e-tests-statefulset-h59ld/ss2 to complete update STEP: Rolling back to a previous revision Aug 27 13:59:49.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-h59ld ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 27 13:59:51.810: INFO: stderr: "I0827 13:59:50.940008 237 log.go:172] (0x4000144840) (0x400065f540) Create stream\nI0827 13:59:50.942660 237 log.go:172] (0x4000144840) (0x400065f540) Stream added, broadcasting: 1\nI0827 13:59:50.953961 237 log.go:172] (0x4000144840) Reply frame received for 1\nI0827 13:59:50.954455 237 log.go:172] (0x4000144840) (0x400065f5e0) Create stream\nI0827 13:59:50.954514 237 log.go:172] (0x4000144840) (0x400065f5e0) Stream added, broadcasting: 3\nI0827 13:59:50.955936 237 log.go:172] (0x4000144840) Reply frame received for 3\nI0827 13:59:50.956219 237 log.go:172] (0x4000144840) (0x400063e000) Create stream\nI0827 13:59:50.956312 237 log.go:172] (0x4000144840) (0x400063e000) Stream added, broadcasting: 5\nI0827 13:59:50.957689 237 log.go:172] (0x4000144840) Reply frame received for 5\nI0827 13:59:51.793781 237 log.go:172] (0x4000144840) Data frame received for 3\nI0827 13:59:51.794154 237 log.go:172] (0x4000144840) Data frame received for 1\nI0827 
13:59:51.794400 237 log.go:172] (0x400065f540) (1) Data frame handling\nI0827 13:59:51.794526 237 log.go:172] (0x4000144840) Data frame received for 5\nI0827 13:59:51.794608 237 log.go:172] (0x400063e000) (5) Data frame handling\nI0827 13:59:51.795095 237 log.go:172] (0x400065f5e0) (3) Data frame handling\nI0827 13:59:51.796247 237 log.go:172] (0x400065f5e0) (3) Data frame sent\nI0827 13:59:51.796320 237 log.go:172] (0x4000144840) Data frame received for 3\nI0827 13:59:51.796362 237 log.go:172] (0x400065f540) (1) Data frame sent\nI0827 13:59:51.796484 237 log.go:172] (0x400065f5e0) (3) Data frame handling\nI0827 13:59:51.797134 237 log.go:172] (0x4000144840) (0x400065f540) Stream removed, broadcasting: 1\nI0827 13:59:51.798710 237 log.go:172] (0x4000144840) Go away received\nI0827 13:59:51.801155 237 log.go:172] (0x4000144840) (0x400065f540) Stream removed, broadcasting: 1\nI0827 13:59:51.801333 237 log.go:172] (0x4000144840) (0x400065f5e0) Stream removed, broadcasting: 3\nI0827 13:59:51.801475 237 log.go:172] (0x4000144840) (0x400063e000) Stream removed, broadcasting: 5\n" Aug 27 13:59:51.810: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 27 13:59:51.810: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 27 14:00:01.866: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Aug 27 14:00:11.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-h59ld ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:00:13.369: INFO: stderr: "I0827 14:00:13.257769 261 log.go:172] (0x4000168840) (0x400079c640) Create stream\nI0827 14:00:13.260151 261 log.go:172] (0x4000168840) (0x400079c640) Stream added, broadcasting: 1\nI0827 14:00:13.270051 261 log.go:172] (0x4000168840) Reply frame received for 1\nI0827 14:00:13.270703 261 log.go:172] (0x4000168840) (0x400079c6e0) Create stream\nI0827 14:00:13.270784 261 log.go:172] (0x4000168840) (0x400079c6e0) Stream added, broadcasting: 3\nI0827 14:00:13.273191 261 log.go:172] (0x4000168840) Reply frame received for 3\nI0827 14:00:13.273728 261 log.go:172] (0x4000168840) (0x400065edc0) Create stream\nI0827 14:00:13.273873 261 log.go:172] (0x4000168840) (0x400065edc0) Stream added, broadcasting: 5\nI0827 14:00:13.275989 261 log.go:172] (0x4000168840) Reply frame received for 5\nI0827 14:00:13.342516 261 log.go:172] (0x4000168840) Data frame received for 3\nI0827 14:00:13.342853 261 log.go:172] (0x4000168840) Data frame received for 5\nI0827 14:00:13.343018 261 log.go:172] (0x400065edc0) (5) Data frame handling\nI0827 14:00:13.343149 261 log.go:172] (0x4000168840) Data frame received for 1\nI0827 14:00:13.343285 261 log.go:172] (0x400079c640) (1) Data frame handling\nI0827 14:00:13.343427 261 log.go:172] (0x400079c6e0) (3) Data frame handling\nI0827 14:00:13.345319 261 log.go:172] (0x400079c640) (1) Data frame sent\nI0827 14:00:13.345515 261 log.go:172] (0x400079c6e0) (3) Data frame sent\nI0827 14:00:13.346183 261 log.go:172] (0x4000168840) Data frame received for 3\nI0827 14:00:13.346308 261 log.go:172] (0x400079c6e0) (3) Data frame handling\nI0827 14:00:13.349352 261 log.go:172] (0x4000168840) (0x400079c640) Stream removed, broadcasting: 1\nI0827 14:00:13.350318 261 log.go:172] (0x4000168840) Go away received\nI0827 14:00:13.353018 261 log.go:172] (0x4000168840) (0x400079c640) Stream removed, broadcasting: 1\nI0827 
14:00:13.353588 261 log.go:172] (0x4000168840) (0x400079c6e0) Stream removed, broadcasting: 3\nI0827 14:00:13.353855 261 log.go:172] (0x4000168840) (0x400065edc0) Stream removed, broadcasting: 5\n" Aug 27 14:00:13.370: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 27 14:00:13.370: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 27 14:00:23.567: INFO: Waiting for StatefulSet e2e-tests-statefulset-h59ld/ss2 to complete update Aug 27 14:00:23.567: INFO: Waiting for Pod e2e-tests-statefulset-h59ld/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Aug 27 14:00:23.568: INFO: Waiting for Pod e2e-tests-statefulset-h59ld/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Aug 27 14:00:33.580: INFO: Waiting for StatefulSet e2e-tests-statefulset-h59ld/ss2 to complete update Aug 27 14:00:33.580: INFO: Waiting for Pod e2e-tests-statefulset-h59ld/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Aug 27 14:00:43.581: INFO: Waiting for StatefulSet e2e-tests-statefulset-h59ld/ss2 to complete update Aug 27 14:00:43.581: INFO: Waiting for Pod e2e-tests-statefulset-h59ld/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Aug 27 14:00:53.621: INFO: Waiting for StatefulSet e2e-tests-statefulset-h59ld/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Aug 27 14:01:03.581: INFO: Deleting all statefulset in ns e2e-tests-statefulset-h59ld Aug 27 14:01:03.587: INFO: Scaling statefulset ss2 to 0 Aug 27 14:01:44.429: INFO: Waiting for statefulset status.replicas updated to 0 Aug 27 14:01:44.435: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:01:44.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-h59ld" for this suite. 
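For reference, the template modification exercised above (image docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine, then the reverse for the rollback) amounts to a get-and-update of the StatefulSet, after which the controller's RollingUpdate strategy replaces pods in reverse ordinal order. A minimal client-go sketch under v1.13-era signatures (newer releases also take a context.Context); the function and error handling are illustrative, not the suite's code.

// Illustrative sketch only: change the StatefulSet's container image and let the
// RollingUpdate strategy roll pods ordinal-by-ordinal; calling it again with the old
// image is effectively the rollback performed above.
package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func setStatefulSetImage(client kubernetes.Interface, namespace, name, image string) error {
	ss, err := client.AppsV1().StatefulSets(namespace).Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	// e.g. image = "docker.io/library/nginx:1.15-alpine"
	ss.Spec.Template.Spec.Containers[0].Image = image
	_, err = client.AppsV1().StatefulSets(namespace).Update(ss)
	return err
}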
Aug 27 14:02:00.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:02:00.959: INFO: namespace: e2e-tests-statefulset-h59ld, resource: bindings, ignored listing per whitelist Aug 27 14:02:01.059: INFO: namespace e2e-tests-statefulset-h59ld deletion completed in 16.40106538s • [SLOW TEST:231.301 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:02:01.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 27 14:02:02.017: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e1481a47-e86d-11ea-894e-0242ac11000d" in namespace "e2e-tests-downward-api-5v58n" to be "success or failure" Aug 27 14:02:02.205: INFO: Pod "downwardapi-volume-e1481a47-e86d-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 187.349881ms Aug 27 14:02:04.445: INFO: Pod "downwardapi-volume-e1481a47-e86d-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.427933574s Aug 27 14:02:07.698: INFO: Pod "downwardapi-volume-e1481a47-e86d-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.68104034s Aug 27 14:02:09.762: INFO: Pod "downwardapi-volume-e1481a47-e86d-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.744271329s Aug 27 14:02:11.803: INFO: Pod "downwardapi-volume-e1481a47-e86d-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 9.785694723s Aug 27 14:02:13.912: INFO: Pod "downwardapi-volume-e1481a47-e86d-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.894412153s STEP: Saw pod success Aug 27 14:02:13.912: INFO: Pod "downwardapi-volume-e1481a47-e86d-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:02:13.971: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-e1481a47-e86d-11ea-894e-0242ac11000d container client-container: STEP: delete the pod Aug 27 14:02:14.127: INFO: Waiting for pod downwardapi-volume-e1481a47-e86d-11ea-894e-0242ac11000d to disappear Aug 27 14:02:14.184: INFO: Pod downwardapi-volume-e1481a47-e86d-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:02:14.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-5v58n" for this suite. Aug 27 14:02:20.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:02:20.413: INFO: namespace: e2e-tests-downward-api-5v58n, resource: bindings, ignored listing per whitelist Aug 27 14:02:20.481: INFO: namespace e2e-tests-downward-api-5v58n deletion completed in 6.136097324s • [SLOW TEST:19.420 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:02:20.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token Aug 27 14:02:21.247: INFO: Waiting up to 5m0s for pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-c9rw2" in namespace "e2e-tests-svcaccounts-w9svg" to be "success or failure" Aug 27 14:02:21.275: INFO: Pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-c9rw2": Phase="Pending", Reason="", readiness=false. Elapsed: 27.975609ms Aug 27 14:02:23.281: INFO: Pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-c9rw2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034149649s Aug 27 14:02:25.288: INFO: Pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-c9rw2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040993541s Aug 27 14:02:27.294: INFO: Pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-c9rw2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047087036s Aug 27 14:02:29.299: INFO: Pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-c9rw2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052071535s Aug 27 14:02:31.366: INFO: Pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-c9rw2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.118972989s STEP: Saw pod success Aug 27 14:02:31.366: INFO: Pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-c9rw2" satisfied condition "success or failure" Aug 27 14:02:31.425: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-c9rw2 container token-test: STEP: delete the pod Aug 27 14:02:31.638: INFO: Waiting for pod pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-c9rw2 to disappear Aug 27 14:02:31.784: INFO: Pod pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-c9rw2 no longer exists STEP: Creating a pod to test consume service account root CA Aug 27 14:02:31.791: INFO: Waiting up to 5m0s for pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-cgzpx" in namespace "e2e-tests-svcaccounts-w9svg" to be "success or failure" Aug 27 14:02:31.802: INFO: Pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-cgzpx": Phase="Pending", Reason="", readiness=false. Elapsed: 11.433525ms Aug 27 14:02:33.857: INFO: Pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-cgzpx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066332075s Aug 27 14:02:35.865: INFO: Pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-cgzpx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073560557s Aug 27 14:02:37.924: INFO: Pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-cgzpx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.133093383s Aug 27 14:02:40.133: INFO: Pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-cgzpx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.34161885s Aug 27 14:02:42.152: INFO: Pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-cgzpx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.361016085s Aug 27 14:02:44.158: INFO: Pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-cgzpx": Phase="Pending", Reason="", readiness=false. Elapsed: 12.366871013s Aug 27 14:02:46.530: INFO: Pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-cgzpx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.738957604s STEP: Saw pod success Aug 27 14:02:46.530: INFO: Pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-cgzpx" satisfied condition "success or failure" Aug 27 14:02:47.458: INFO: Trying to get logs from node hunter-worker pod pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-cgzpx container root-ca-test: STEP: delete the pod Aug 27 14:02:47.968: INFO: Waiting for pod pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-cgzpx to disappear Aug 27 14:02:48.282: INFO: Pod pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-cgzpx no longer exists STEP: Creating a pod to test consume service account namespace Aug 27 14:02:48.617: INFO: Waiting up to 5m0s for pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-qc8hl" in namespace "e2e-tests-svcaccounts-w9svg" to be "success or failure" Aug 27 14:02:48.648: INFO: Pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-qc8hl": Phase="Pending", Reason="", readiness=false. Elapsed: 30.351625ms Aug 27 14:02:50.655: INFO: Pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-qc8hl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037756463s Aug 27 14:02:52.690: INFO: Pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-qc8hl": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.072204947s Aug 27 14:02:54.810: INFO: Pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-qc8hl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.192545588s Aug 27 14:02:56.817: INFO: Pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-qc8hl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.199188827s Aug 27 14:02:59.170: INFO: Pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-qc8hl": Phase="Pending", Reason="", readiness=false. Elapsed: 10.552700117s Aug 27 14:03:01.554: INFO: Pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-qc8hl": Phase="Pending", Reason="", readiness=false. Elapsed: 12.936170641s Aug 27 14:03:04.474: INFO: Pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-qc8hl": Phase="Pending", Reason="", readiness=false. Elapsed: 15.856694894s Aug 27 14:03:07.098: INFO: Pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-qc8hl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.480430159s STEP: Saw pod success Aug 27 14:03:07.098: INFO: Pod "pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-qc8hl" satisfied condition "success or failure" Aug 27 14:03:07.157: INFO: Trying to get logs from node hunter-worker pod pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-qc8hl container namespace-test: STEP: delete the pod Aug 27 14:03:07.901: INFO: Waiting for pod pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-qc8hl to disappear Aug 27 14:03:08.109: INFO: Pod pod-service-account-ecef3384-e86d-11ea-894e-0242ac11000d-qc8hl no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:03:08.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-w9svg" for this suite. 
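The token-test, root-ca-test and namespace-test containers above each read one of the files the kubelet projects under /var/run/secrets/kubernetes.io/serviceaccount (the same mount path visible in the pod dumps later in this log). A minimal sketch of a container process consuming them; the program itself is an assumption for illustration, not the test image's actual command.

// Illustrative sketch only: read the service-account files mounted into every pod
// that automounts its service account token.
package main

import (
	"fmt"
	"io/ioutil"
	"path/filepath"
)

const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

func main() {
	for _, name := range []string{"token", "ca.crt", "namespace"} {
		data, err := ioutil.ReadFile(filepath.Join(saDir, name))
		if err != nil {
			fmt.Printf("%s: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", name, len(data))
	}
}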
Aug 27 14:03:18.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:03:18.680: INFO: namespace: e2e-tests-svcaccounts-w9svg, resource: bindings, ignored listing per whitelist Aug 27 14:03:18.719: INFO: namespace e2e-tests-svcaccounts-w9svg deletion completed in 10.551726031s • [SLOW TEST:58.238 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:03:18.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Aug 27 14:03:18.963: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:03:37.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-6krfb" for this suite. 
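The log above only records "PodSpec: initContainers in spec.initContainers"; as context, a minimal sketch of a RestartNever Pod with init containers, which must all run to completion, in order, before the regular container starts. Names and images are assumptions for illustration, not the suite's fixture.

// Illustrative sketch only: a RestartNever Pod whose init containers gate the main container.
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func initContainerPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{
				{Name: "init-1", Image: "busybox", Command: []string{"/bin/true"}},
				{Name: "init-2", Image: "busybox", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run", Image: "busybox", Command: []string{"/bin/true"}},
			},
		},
	}
}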
Aug 27 14:03:49.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:03:49.646: INFO: namespace: e2e-tests-init-container-6krfb, resource: bindings, ignored listing per whitelist Aug 27 14:03:49.664: INFO: namespace e2e-tests-init-container-6krfb deletion completed in 12.538008321s • [SLOW TEST:30.944 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:03:49.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Aug 27 14:03:57.807: INFO: 9 pods remaining Aug 27 14:03:57.807: INFO: 0 pods has nil DeletionTimestamp Aug 27 14:03:57.807: INFO: Aug 27 14:03:58.648: INFO: 0 pods remaining Aug 27 14:03:58.648: INFO: 0 pods has nil DeletionTimestamp Aug 27 14:03:58.648: INFO: STEP: Gathering metrics W0827 14:04:00.892204 7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 27 14:04:00.893: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:04:00.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-mzs8n" for this suite. 
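The behaviour verified above, keeping the rc around until all of its pods are deleted, is what a foreground-propagation delete requests. A minimal client-go sketch under v1.13-era signatures (newer releases also take a context.Context); the kubeconfig path matches the one logged by this run, while the namespace and rc name are assumptions for illustration.

// Illustrative sketch only: delete a ReplicationController with foreground propagation,
// so the object is retained until the garbage collector has deleted its dependents.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	policy := metav1.DeletePropagationForeground
	err = client.CoreV1().ReplicationControllers("example-ns").Delete("example-rc", &metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
	fmt.Println("foreground delete requested, err:", err)
}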
Aug 27 14:04:12.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:04:12.781: INFO: namespace: e2e-tests-gc-mzs8n, resource: bindings, ignored listing per whitelist Aug 27 14:04:12.860: INFO: namespace e2e-tests-gc-mzs8n deletion completed in 11.178275782s • [SLOW TEST:23.196 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:04:12.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 27 14:04:13.415: INFO: Creating deployment "nginx-deployment" Aug 27 14:04:13.510: INFO: Waiting for observed generation 1 Aug 27 14:04:16.230: INFO: Waiting for all required pods to come up Aug 27 14:04:16.582: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Aug 27 14:04:36.304: INFO: Waiting for deployment "nginx-deployment" to complete Aug 27 14:04:36.683: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:10, UpdatedReplicas:10, ReadyReplicas:9, AvailableReplicas:9, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734133875, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734133875, loc:(*time.Location)(0x70e3ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734133876, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734133853, loc:(*time.Location)(0x70e3ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"nginx-deployment-85ddf47c5d\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 27 14:04:39.087: INFO: Updating deployment "nginx-deployment" with a non-existent image Aug 27 14:04:39.101: INFO: Updating deployment nginx-deployment Aug 27 14:04:39.102: INFO: Waiting for observed generation 2 Aug 27 14:04:42.215: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Aug 27 14:04:46.459: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Aug 27 14:04:46.849: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Aug 27 14:04:48.266: INFO: Verifying that the second rollout's replicaset has 
.status.availableReplicas = 0 Aug 27 14:04:48.266: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Aug 27 14:04:48.305: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Aug 27 14:04:48.314: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Aug 27 14:04:48.314: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Aug 27 14:04:48.322: INFO: Updating deployment nginx-deployment Aug 27 14:04:48.322: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Aug 27 14:04:48.509: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Aug 27 14:04:50.838: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Aug 27 14:04:51.192: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-st978,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-st978/deployments/nginx-deployment,UID:2fcc1b88-e86e-11ea-a485-0242ac120004,ResourceVersion:2643533,Generation:3,CreationTimestamp:2020-08-27 14:04:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-08-27 14:04:48 +0000 UTC 2020-08-27 14:04:48 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-08-27 14:04:49 +0000 UTC 2020-08-27 14:04:13 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} Aug 27 14:04:51.293: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-st978,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-st978/replicasets/nginx-deployment-5c98f8fb5,UID:3f1b6990-e86e-11ea-a485-0242ac120004,ResourceVersion:2643532,Generation:3,CreationTimestamp:2020-08-27 14:04:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 2fcc1b88-e86e-11ea-a485-0242ac120004 0x400187a367 0x400187a368}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 27 14:04:51.294: INFO: All old ReplicaSets of Deployment "nginx-deployment": Aug 27 14:04:51.295: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-st978,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-st978/replicasets/nginx-deployment-85ddf47c5d,UID:2fdc8b2f-e86e-11ea-a485-0242ac120004,ResourceVersion:2643529,Generation:3,CreationTimestamp:2020-08-27 14:04:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 2fcc1b88-e86e-11ea-a485-0242ac120004 0x400187a4a7 0x400187a4a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Aug 27 14:04:51.634: INFO: Pod "nginx-deployment-5c98f8fb5-2hz4l" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2hz4l,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-5c98f8fb5-2hz4l,UID:4064b960-e86e-11ea-a485-0242ac120004,ResourceVersion:2643420,Generation:0,CreationTimestamp:2020-08-27 14:04:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3f1b6990-e86e-11ea-a485-0242ac120004 0x400187b237 0x400187b238}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x400187b2c0} {node.kubernetes.io/unreachable Exists NoExecute 0x400187b2e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:41 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-27 14:04:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.635: INFO: Pod "nginx-deployment-5c98f8fb5-52drt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-52drt,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-5c98f8fb5-52drt,UID:44e0c9a6-e86e-11ea-a485-0242ac120004,ResourceVersion:2643525,Generation:0,CreationTimestamp:2020-08-27 14:04:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3f1b6990-e86e-11ea-a485-0242ac120004 0x400187b410 0x400187b411}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x400187b4a0} {node.kubernetes.io/unreachable Exists NoExecute 0x400187b4c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-27 14:04:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.636: INFO: Pod "nginx-deployment-5c98f8fb5-6j7wb" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-6j7wb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-5c98f8fb5-6j7wb,UID:44ea7a7f-e86e-11ea-a485-0242ac120004,ResourceVersion:2643546,Generation:0,CreationTimestamp:2020-08-27 14:04:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3f1b6990-e86e-11ea-a485-0242ac120004 0x400187b580 0x400187b581}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x400187b600} {node.kubernetes.io/unreachable Exists NoExecute 0x400187b620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-27 14:04:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.637: INFO: Pod "nginx-deployment-5c98f8fb5-8wfll" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8wfll,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-5c98f8fb5-8wfll,UID:44f399cf-e86e-11ea-a485-0242ac120004,ResourceVersion:2643510,Generation:0,CreationTimestamp:2020-08-27 14:04:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3f1b6990-e86e-11ea-a485-0242ac120004 0x400187b6e0 0x400187b6e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x400187b760} {node.kubernetes.io/unreachable Exists NoExecute 0x400187b780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.639: INFO: Pod "nginx-deployment-5c98f8fb5-9bhxk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9bhxk,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-5c98f8fb5-9bhxk,UID:4071dbec-e86e-11ea-a485-0242ac120004,ResourceVersion:2643436,Generation:0,CreationTimestamp:2020-08-27 14:04:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3f1b6990-e86e-11ea-a485-0242ac120004 0x400187b7f7 0x400187b7f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x400187b870} {node.kubernetes.io/unreachable Exists NoExecute 0x400187b890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:42 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:,StartTime:2020-08-27 14:04:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.640: INFO: Pod "nginx-deployment-5c98f8fb5-9dnlh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9dnlh,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-5c98f8fb5-9dnlh,UID:44eb2145-e86e-11ea-a485-0242ac120004,ResourceVersion:2643496,Generation:0,CreationTimestamp:2020-08-27 14:04:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3f1b6990-e86e-11ea-a485-0242ac120004 0x400187b950 0x400187b951}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x400187b9d0} {node.kubernetes.io/unreachable Exists NoExecute 0x400187b9f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.640: INFO: Pod "nginx-deployment-5c98f8fb5-ddqq6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ddqq6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-5c98f8fb5-ddqq6,UID:44ea7796-e86e-11ea-a485-0242ac120004,ResourceVersion:2643499,Generation:0,CreationTimestamp:2020-08-27 14:04:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3f1b6990-e86e-11ea-a485-0242ac120004 0x400187ba67 0x400187ba68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x400187bae0} {node.kubernetes.io/unreachable Exists NoExecute 0x400187bb00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.641: INFO: Pod "nginx-deployment-5c98f8fb5-f6tjf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-f6tjf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-5c98f8fb5-f6tjf,UID:44ca85fa-e86e-11ea-a485-0242ac120004,ResourceVersion:2643506,Generation:0,CreationTimestamp:2020-08-27 14:04:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3f1b6990-e86e-11ea-a485-0242ac120004 0x400187bb77 0x400187bb78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x400187bbf0} {node.kubernetes.io/unreachable Exists NoExecute 0x400187bc10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-27 14:04:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.642: INFO: Pod "nginx-deployment-5c98f8fb5-fd2wx" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-fd2wx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-5c98f8fb5-fd2wx,UID:3f971d39-e86e-11ea-a485-0242ac120004,ResourceVersion:2643534,Generation:0,CreationTimestamp:2020-08-27 14:04:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3f1b6990-e86e-11ea-a485-0242ac120004 0x400187bcf0 0x400187bcf1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x400187bd70} {node.kubernetes.io/unreachable Exists NoExecute 0x400187bd90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:39 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.161,StartTime:2020-08-27 14:04:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.643: INFO: Pod "nginx-deployment-5c98f8fb5-lhbt7" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-lhbt7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-5c98f8fb5-lhbt7,UID:44ea6aed-e86e-11ea-a485-0242ac120004,ResourceVersion:2643500,Generation:0,CreationTimestamp:2020-08-27 14:04:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3f1b6990-e86e-11ea-a485-0242ac120004 0x400187be70 0x400187be71}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x400187bef0} {node.kubernetes.io/unreachable Exists NoExecute 0x400187bf10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.644: INFO: Pod "nginx-deployment-5c98f8fb5-lqsmd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-lqsmd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-5c98f8fb5-lqsmd,UID:44e0cb82-e86e-11ea-a485-0242ac120004,ResourceVersion:2643526,Generation:0,CreationTimestamp:2020-08-27 14:04:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3f1b6990-e86e-11ea-a485-0242ac120004 0x400187bf87 0x400187bf88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] 
{map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x40018886f0} {node.kubernetes.io/unreachable Exists NoExecute 0x4001888710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:,StartTime:2020-08-27 14:04:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.645: INFO: Pod "nginx-deployment-5c98f8fb5-sq2bj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-sq2bj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-5c98f8fb5-sq2bj,UID:3f3ee2ec-e86e-11ea-a485-0242ac120004,ResourceVersion:2643447,Generation:0,CreationTimestamp:2020-08-27 14:04:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3f1b6990-e86e-11ea-a485-0242ac120004 0x40018887e0 0x40018887e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x4001888b60} {node.kubernetes.io/unreachable Exists NoExecute 0x4001888b80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:39 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.160,StartTime:2020-08-27 14:04:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.646: INFO: Pod "nginx-deployment-5c98f8fb5-vmkl2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vmkl2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-5c98f8fb5-vmkl2,UID:3f972699-e86e-11ea-a485-0242ac120004,ResourceVersion:2643449,Generation:0,CreationTimestamp:2020-08-27 14:04:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3f1b6990-e86e-11ea-a485-0242ac120004 0x4001888c70 0x4001888c71}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x4001888fc0} {node.kubernetes.io/unreachable Exists NoExecute 0x4001888fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:39 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.250,StartTime:2020-08-27 14:04:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.647: INFO: Pod "nginx-deployment-85ddf47c5d-26p7n" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-26p7n,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-85ddf47c5d-26p7n,UID:44e0ebb4-e86e-11ea-a485-0242ac120004,ResourceVersion:2643537,Generation:0,CreationTimestamp:2020-08-27 14:04:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2fdc8b2f-e86e-11ea-a485-0242ac120004 0x40018890e0 0x40018890e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x4001889400} {node.kubernetes.io/unreachable Exists NoExecute 0x4001889420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-27 14:04:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.648: INFO: Pod "nginx-deployment-85ddf47c5d-2chrt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2chrt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-85ddf47c5d-2chrt,UID:44c49604-e86e-11ea-a485-0242ac120004,ResourceVersion:2643508,Generation:0,CreationTimestamp:2020-08-27 14:04:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2fdc8b2f-e86e-11ea-a485-0242ac120004 0x40018894f7 0x40018894f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x4001889710} {node.kubernetes.io/unreachable Exists NoExecute 0x4001889740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:,StartTime:2020-08-27 14:04:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.649: INFO: Pod "nginx-deployment-85ddf47c5d-4pqnl" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4pqnl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-85ddf47c5d-4pqnl,UID:2fff2795-e86e-11ea-a485-0242ac120004,ResourceVersion:2643344,Generation:0,CreationTimestamp:2020-08-27 14:04:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2fdc8b2f-e86e-11ea-a485-0242ac120004 0x40018897f7 0x40018897f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x40018898b0} {node.kubernetes.io/unreachable Exists NoExecute 0x40018898e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:13 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.159,StartTime:2020-08-27 14:04:15 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-27 14:04:32 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://477f04426b3e90ba1f0e8308b6e0b220983302ae62e981677eab46fa83e07d8b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.649: INFO: Pod "nginx-deployment-85ddf47c5d-7jcwd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7jcwd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-85ddf47c5d-7jcwd,UID:44e0e5f6-e86e-11ea-a485-0242ac120004,ResourceVersion:2643547,Generation:0,CreationTimestamp:2020-08-27 14:04:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2fdc8b2f-e86e-11ea-a485-0242ac120004 0x4001889a17 0x4001889a18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x4001889a90} {node.kubernetes.io/unreachable Exists NoExecute 0x4001889ab0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:,StartTime:2020-08-27 14:04:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.651: INFO: Pod "nginx-deployment-85ddf47c5d-84jdx" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-84jdx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-85ddf47c5d-84jdx,UID:2ff6a292-e86e-11ea-a485-0242ac120004,ResourceVersion:2643351,Generation:0,CreationTimestamp:2020-08-27 14:04:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2fdc8b2f-e86e-11ea-a485-0242ac120004 0x4001889b67 0x4001889b68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x4001889c70} {node.kubernetes.io/unreachable Exists NoExecute 0x4001889ca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:13 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.245,StartTime:2020-08-27 14:04:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-27 14:04:29 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://62af8f1b3f137dcaff15b5ce4bb6d2deff504b92f7d37a149ca819af3e739ee0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.652: INFO: Pod "nginx-deployment-85ddf47c5d-8clrp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8clrp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-85ddf47c5d-8clrp,UID:44f3ae1b-e86e-11ea-a485-0242ac120004,ResourceVersion:2643511,Generation:0,CreationTimestamp:2020-08-27 14:04:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2fdc8b2f-e86e-11ea-a485-0242ac120004 0x4001889dd7 0x4001889dd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x4001889e50} {node.kubernetes.io/unreachable Exists NoExecute 0x4001889e80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.653: INFO: Pod "nginx-deployment-85ddf47c5d-9fst6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9fst6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-85ddf47c5d-9fst6,UID:44f3b527-e86e-11ea-a485-0242ac120004,ResourceVersion:2643512,Generation:0,CreationTimestamp:2020-08-27 14:04:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2fdc8b2f-e86e-11ea-a485-0242ac120004 0x4001889ef7 0x4001889ef8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x4001950120} {node.kubernetes.io/unreachable Exists NoExecute 0x4001950160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.653: INFO: Pod "nginx-deployment-85ddf47c5d-9ljz9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9ljz9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-85ddf47c5d-9ljz9,UID:44ea2174-e86e-11ea-a485-0242ac120004,ResourceVersion:2643502,Generation:0,CreationTimestamp:2020-08-27 14:04:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2fdc8b2f-e86e-11ea-a485-0242ac120004 0x4001950237 0x4001950238}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x4001950440} {node.kubernetes.io/unreachable Exists NoExecute 0x4001950470}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.654: INFO: Pod "nginx-deployment-85ddf47c5d-b9wmf" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-b9wmf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-85ddf47c5d-b9wmf,UID:2ff69d68-e86e-11ea-a485-0242ac120004,ResourceVersion:2643312,Generation:0,CreationTimestamp:2020-08-27 14:04:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2fdc8b2f-e86e-11ea-a485-0242ac120004 0x40019508f7 
0x40019508f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x4001950ae0} {node.kubernetes.io/unreachable Exists NoExecute 0x4001950b50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:13 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.155,StartTime:2020-08-27 14:04:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-27 14:04:27 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b3a3e1c45a8d47fca42d1c436b8976e1c017211e1b972355df66e84ef23b2a53}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.655: INFO: Pod "nginx-deployment-85ddf47c5d-g99dh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-g99dh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-85ddf47c5d-g99dh,UID:44ea0772-e86e-11ea-a485-0242ac120004,ResourceVersion:2643501,Generation:0,CreationTimestamp:2020-08-27 14:04:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2fdc8b2f-e86e-11ea-a485-0242ac120004 0x4001951487 0x4001951488}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] 
map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x40019515c0} {node.kubernetes.io/unreachable Exists NoExecute 0x4001951680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.656: INFO: Pod "nginx-deployment-85ddf47c5d-jksqt" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jksqt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-85ddf47c5d-jksqt,UID:2fff238e-e86e-11ea-a485-0242ac120004,ResourceVersion:2643336,Generation:0,CreationTimestamp:2020-08-27 14:04:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2fdc8b2f-e86e-11ea-a485-0242ac120004 0x4001951c37 0x4001951c38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x4001951d90} 
{node.kubernetes.io/unreachable Exists NoExecute 0x4001951db0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:13 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.246,StartTime:2020-08-27 14:04:14 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-27 14:04:30 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9d47393120b6961331055e973e4f9642ccde66ff1e5faaa87d7ba20af616b5fc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.657: INFO: Pod "nginx-deployment-85ddf47c5d-lwzds" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lwzds,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-85ddf47c5d-lwzds,UID:44f3b321-e86e-11ea-a485-0242ac120004,ResourceVersion:2643515,Generation:0,CreationTimestamp:2020-08-27 14:04:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2fdc8b2f-e86e-11ea-a485-0242ac120004 0x4001da8077 0x4001da8078}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x4001da80f0} {node.kubernetes.io/unreachable Exists NoExecute 0x4001da8120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.657: INFO: Pod "nginx-deployment-85ddf47c5d-mf6s5" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mf6s5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-85ddf47c5d-mf6s5,UID:2ff9b528-e86e-11ea-a485-0242ac120004,ResourceVersion:2643356,Generation:0,CreationTimestamp:2020-08-27 14:04:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2fdc8b2f-e86e-11ea-a485-0242ac120004 0x4001da8197 0x4001da8198}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x4001da8350} {node.kubernetes.io/unreachable Exists NoExecute 0x4001da8380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:13 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.248,StartTime:2020-08-27 14:04:14 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-27 14:04:31 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://813d814c9119b66ac92257e6f6515d0a744dea7d7e4b0257b26f9a34672d0d4b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.658: INFO: Pod "nginx-deployment-85ddf47c5d-mh57t" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mh57t,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-85ddf47c5d-mh57t,UID:2ff9b7ff-e86e-11ea-a485-0242ac120004,ResourceVersion:2643327,Generation:0,CreationTimestamp:2020-08-27 14:04:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2fdc8b2f-e86e-11ea-a485-0242ac120004 0x4001da8447 0x4001da8448}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x4001da8550} {node.kubernetes.io/unreachable Exists NoExecute 0x4001da8570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:13 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.156,StartTime:2020-08-27 14:04:14 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-27 14:04:29 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://daee48a974143597cc10995ec0300aa4c3b151928f3b8438ec839866fb9cc163}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.659: INFO: Pod "nginx-deployment-85ddf47c5d-qc4mt" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qc4mt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-85ddf47c5d-qc4mt,UID:44ea0450-e86e-11ea-a485-0242ac120004,ResourceVersion:2643554,Generation:0,CreationTimestamp:2020-08-27 14:04:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2fdc8b2f-e86e-11ea-a485-0242ac120004 0x4001da8697 0x4001da8698}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x4001da8710} {node.kubernetes.io/unreachable Exists NoExecute 0x4001da8730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:,StartTime:2020-08-27 14:04:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.660: INFO: Pod "nginx-deployment-85ddf47c5d-sjzlp" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sjzlp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-85ddf47c5d-sjzlp,UID:2ff9da93-e86e-11ea-a485-0242ac120004,ResourceVersion:2643343,Generation:0,CreationTimestamp:2020-08-27 14:04:13 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2fdc8b2f-e86e-11ea-a485-0242ac120004 0x4001da87e7 0x4001da87e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x4001da8860} {node.kubernetes.io/unreachable Exists NoExecute 0x4001da8880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:13 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.247,StartTime:2020-08-27 14:04:14 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-27 14:04:31 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://e647746245f4bd9d42b38d01a1c99d36605b525c8fa51fc565e433e6bc3cf79d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.661: INFO: Pod "nginx-deployment-85ddf47c5d-vvrv7" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vvrv7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-85ddf47c5d-vvrv7,UID:2fe80ddc-e86e-11ea-a485-0242ac120004,ResourceVersion:2643313,Generation:0,CreationTimestamp:2020-08-27 14:04:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2fdc8b2f-e86e-11ea-a485-0242ac120004 0x4001da8947 0x4001da8948}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x4001da89c0} {node.kubernetes.io/unreachable Exists NoExecute 0x4001da89e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:13 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.244,StartTime:2020-08-27 14:04:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-27 14:04:27 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://7466415c95ac20fae2b26e6dde2e062e471a91518736e6f9ca568cf5b592c21d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.662: INFO: Pod "nginx-deployment-85ddf47c5d-wb9x2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wb9x2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-85ddf47c5d-wb9x2,UID:44ea1d13-e86e-11ea-a485-0242ac120004,ResourceVersion:2643555,Generation:0,CreationTimestamp:2020-08-27 14:04:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2fdc8b2f-e86e-11ea-a485-0242ac120004 0x4001da8aa7 0x4001da8aa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil 
/dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x4001da8b20} {node.kubernetes.io/unreachable Exists NoExecute 0x4001da8b40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-27 14:04:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.663: INFO: Pod "nginx-deployment-85ddf47c5d-xd9zf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xd9zf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-85ddf47c5d-xd9zf,UID:44f39bbb-e86e-11ea-a485-0242ac120004,ResourceVersion:2643514,Generation:0,CreationTimestamp:2020-08-27 14:04:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2fdc8b2f-e86e-11ea-a485-0242ac120004 0x4001da8bf7 0x4001da8bf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x4001da8c70} {node.kubernetes.io/unreachable Exists NoExecute 0x4001da8c90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 27 14:04:51.663: INFO: Pod "nginx-deployment-85ddf47c5d-z42q9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-z42q9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-st978,SelfLink:/api/v1/namespaces/e2e-tests-deployment-st978/pods/nginx-deployment-85ddf47c5d-z42q9,UID:44f3ae91-e86e-11ea-a485-0242ac120004,ResourceVersion:2643513,Generation:0,CreationTimestamp:2020-08-27 14:04:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2fdc8b2f-e86e-11ea-a485-0242ac120004 0x4001da8d07 0x4001da8d08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pz5n9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pz5n9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pz5n9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x4001da8d80} {node.kubernetes.io/unreachable Exists NoExecute 0x4001da8da0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:04:48 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:04:51.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-st978" for this suite. Aug 27 14:06:02.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:06:02.081: INFO: namespace: e2e-tests-deployment-st978, resource: bindings, ignored listing per whitelist Aug 27 14:06:02.185: INFO: namespace e2e-tests-deployment-st978 deletion completed in 1m9.655074443s • [SLOW TEST:109.324 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:06:02.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-70d2140a-e86e-11ea-894e-0242ac11000d STEP: Creating a pod to test consume secrets Aug 27 14:06:02.586: INFO: Waiting up to 5m0s for pod "pod-secrets-70d37660-e86e-11ea-894e-0242ac11000d" in namespace "e2e-tests-secrets-7pt7g" to be "success or failure" Aug 27 14:06:02.598: INFO: Pod "pod-secrets-70d37660-e86e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.297609ms Aug 27 14:06:04.680: INFO: Pod "pod-secrets-70d37660-e86e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093058513s Aug 27 14:06:06.686: INFO: Pod "pod-secrets-70d37660-e86e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099142725s Aug 27 14:06:08.729: INFO: Pod "pod-secrets-70d37660-e86e-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.142478425s STEP: Saw pod success Aug 27 14:06:08.729: INFO: Pod "pod-secrets-70d37660-e86e-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:06:08.943: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-70d37660-e86e-11ea-894e-0242ac11000d container secret-volume-test: STEP: delete the pod Aug 27 14:06:08.993: INFO: Waiting for pod pod-secrets-70d37660-e86e-11ea-894e-0242ac11000d to disappear Aug 27 14:06:09.002: INFO: Pod pod-secrets-70d37660-e86e-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:06:09.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-7pt7g" for this suite. Aug 27 14:06:17.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:06:17.555: INFO: namespace: e2e-tests-secrets-7pt7g, resource: bindings, ignored listing per whitelist Aug 27 14:06:17.560: INFO: namespace e2e-tests-secrets-7pt7g deletion completed in 8.521639769s • [SLOW TEST:15.374 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:06:17.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 27 14:06:18.556: INFO: Waiting up to 5m0s for pod "pod-7a613e69-e86e-11ea-894e-0242ac11000d" in namespace "e2e-tests-emptydir-gx492" to be "success or failure" Aug 27 14:06:18.631: INFO: Pod "pod-7a613e69-e86e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 75.101586ms Aug 27 14:06:21.172: INFO: Pod "pod-7a613e69-e86e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.616004159s Aug 27 14:06:23.334: INFO: Pod "pod-7a613e69-e86e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.77813757s Aug 27 14:06:25.340: INFO: Pod "pod-7a613e69-e86e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.784733752s Aug 27 14:06:27.369: INFO: Pod "pod-7a613e69-e86e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.813272844s Aug 27 14:06:29.381: INFO: Pod "pod-7a613e69-e86e-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 10.824940155s Aug 27 14:06:31.471: INFO: Pod "pod-7a613e69-e86e-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.915062839s STEP: Saw pod success Aug 27 14:06:31.471: INFO: Pod "pod-7a613e69-e86e-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:06:31.692: INFO: Trying to get logs from node hunter-worker pod pod-7a613e69-e86e-11ea-894e-0242ac11000d container test-container: STEP: delete the pod Aug 27 14:06:32.016: INFO: Waiting for pod pod-7a613e69-e86e-11ea-894e-0242ac11000d to disappear Aug 27 14:06:32.111: INFO: Pod pod-7a613e69-e86e-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:06:32.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-gx492" for this suite. Aug 27 14:06:42.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:06:42.504: INFO: namespace: e2e-tests-emptydir-gx492, resource: bindings, ignored listing per whitelist Aug 27 14:06:42.588: INFO: namespace e2e-tests-emptydir-gx492 deletion completed in 10.403977323s • [SLOW TEST:25.028 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:06:42.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Aug 27 14:06:51.124: INFO: Successfully updated pod "pod-update-activedeadlineseconds-89c66bad-e86e-11ea-894e-0242ac11000d" Aug 27 14:06:51.124: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-89c66bad-e86e-11ea-894e-0242ac11000d" in namespace "e2e-tests-pods-g2npv" to be "terminated due to deadline exceeded" Aug 27 14:06:51.692: INFO: Pod "pod-update-activedeadlineseconds-89c66bad-e86e-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 567.672114ms Aug 27 14:06:53.731: INFO: Pod "pod-update-activedeadlineseconds-89c66bad-e86e-11ea-894e-0242ac11000d": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 2.606316071s Aug 27 14:06:53.731: INFO: Pod "pod-update-activedeadlineseconds-89c66bad-e86e-11ea-894e-0242ac11000d" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:06:53.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-g2npv" for this suite. Aug 27 14:07:00.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:07:00.466: INFO: namespace: e2e-tests-pods-g2npv, resource: bindings, ignored listing per whitelist Aug 27 14:07:00.496: INFO: namespace e2e-tests-pods-g2npv deletion completed in 6.7578089s • [SLOW TEST:17.905 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:07:00.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Aug 27 14:07:00.638: INFO: namespace e2e-tests-kubectl-pwvsk Aug 27 14:07:00.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-pwvsk' Aug 27 14:07:10.168: INFO: stderr: "" Aug 27 14:07:10.168: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Aug 27 14:07:11.214: INFO: Selector matched 1 pods for map[app:redis] Aug 27 14:07:11.214: INFO: Found 0 / 1 Aug 27 14:07:12.394: INFO: Selector matched 1 pods for map[app:redis] Aug 27 14:07:12.394: INFO: Found 0 / 1 Aug 27 14:07:13.202: INFO: Selector matched 1 pods for map[app:redis] Aug 27 14:07:13.202: INFO: Found 0 / 1 Aug 27 14:07:14.178: INFO: Selector matched 1 pods for map[app:redis] Aug 27 14:07:14.178: INFO: Found 0 / 1 Aug 27 14:07:15.227: INFO: Selector matched 1 pods for map[app:redis] Aug 27 14:07:15.227: INFO: Found 0 / 1 Aug 27 14:07:16.255: INFO: Selector matched 1 pods for map[app:redis] Aug 27 14:07:16.255: INFO: Found 0 / 1 Aug 27 14:07:17.244: INFO: Selector matched 1 pods for map[app:redis] Aug 27 14:07:17.244: INFO: Found 0 / 1 Aug 27 14:07:18.192: INFO: Selector matched 1 pods for map[app:redis] Aug 27 14:07:18.193: INFO: Found 0 / 1 Aug 27 14:07:19.178: INFO: Selector matched 1 pods for map[app:redis] Aug 27 14:07:19.178: INFO: Found 0 / 1 Aug 27 14:07:20.893: INFO: Selector matched 1 pods for map[app:redis] Aug 27 14:07:20.893: INFO: Found 1 / 1 Aug 27 14:07:20.893: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 Aug 27 14:07:20.930: INFO: Selector matched 1 pods for map[app:redis] Aug 27 14:07:20.930: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Aug 27 14:07:20.930: INFO: wait on redis-master startup in e2e-tests-kubectl-pwvsk Aug 27 14:07:20.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ggs5r redis-master --namespace=e2e-tests-kubectl-pwvsk' Aug 27 14:07:22.231: INFO: stderr: "" Aug 27 14:07:22.232: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 27 Aug 14:07:19.217 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 27 Aug 14:07:19.217 # Server started, Redis version 3.2.12\n1:M 27 Aug 14:07:19.217 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 27 Aug 14:07:19.217 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Aug 27 14:07:22.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-pwvsk' Aug 27 14:07:23.951: INFO: stderr: "" Aug 27 14:07:23.951: INFO: stdout: "service/rm2 exposed\n" Aug 27 14:07:24.209: INFO: Service rm2 in namespace e2e-tests-kubectl-pwvsk found. STEP: exposing service Aug 27 14:07:26.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-pwvsk' Aug 27 14:07:27.699: INFO: stderr: "" Aug 27 14:07:27.699: INFO: stdout: "service/rm3 exposed\n" Aug 27 14:07:27.748: INFO: Service rm3 in namespace e2e-tests-kubectl-pwvsk found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:07:29.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-pwvsk" for this suite. 
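For anyone replaying this expose sequence outside the e2e framework, the two operations recorded above boil down to the following kubectl invocations (copied from the log with the --kubeconfig flag dropped; substitute your own namespace):

# Expose the redis-master replication controller as a new service "rm2" mapping port 1234 -> 6379
kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-pwvsk
# Expose the resulting service again as "rm3" on a different port, which is what the second STEP verifies
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-pwvsk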
Aug 27 14:07:53.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:07:54.189: INFO: namespace: e2e-tests-kubectl-pwvsk, resource: bindings, ignored listing per whitelist Aug 27 14:07:54.191: INFO: namespace e2e-tests-kubectl-pwvsk deletion completed in 24.424934511s • [SLOW TEST:53.695 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:07:54.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 27 14:07:54.509: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b38ae5d9-e86e-11ea-894e-0242ac11000d" in namespace "e2e-tests-downward-api-zkv5l" to be "success or failure" Aug 27 14:07:54.547: INFO: Pod "downwardapi-volume-b38ae5d9-e86e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 37.610938ms Aug 27 14:07:56.682: INFO: Pod "downwardapi-volume-b38ae5d9-e86e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173358389s Aug 27 14:07:58.687: INFO: Pod "downwardapi-volume-b38ae5d9-e86e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178148488s Aug 27 14:08:00.693: INFO: Pod "downwardapi-volume-b38ae5d9-e86e-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.183924597s STEP: Saw pod success Aug 27 14:08:00.693: INFO: Pod "downwardapi-volume-b38ae5d9-e86e-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:08:00.698: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-b38ae5d9-e86e-11ea-894e-0242ac11000d container client-container: STEP: delete the pod Aug 27 14:08:00.848: INFO: Waiting for pod downwardapi-volume-b38ae5d9-e86e-11ea-894e-0242ac11000d to disappear Aug 27 14:08:00.852: INFO: Pod downwardapi-volume-b38ae5d9-e86e-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:08:00.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-zkv5l" for this suite. 
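The pod this Downward API test creates is not shown in the log; as a rough sketch (pod name, image, and mount path are illustrative, not the test's own), a downward API volume that surfaces the default cpu limit looks like the manifest below. Because the container sets no cpu limit, the projected file falls back to the node's allocatable cpu, which is the behaviour the test asserts on:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m
EOF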
Aug 27 14:08:08.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:08:09.342: INFO: namespace: e2e-tests-downward-api-zkv5l, resource: bindings, ignored listing per whitelist Aug 27 14:08:09.365: INFO: namespace e2e-tests-downward-api-zkv5l deletion completed in 8.505829313s • [SLOW TEST:15.173 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:08:09.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 27 14:08:10.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-j84cg' Aug 27 14:08:12.223: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 27 14:08:12.223: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Aug 27 14:08:12.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-j84cg' Aug 27 14:08:13.918: INFO: stderr: "" Aug 27 14:08:13.918: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:08:13.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-j84cg" for this suite. 
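The job creation and cleanup above can be reproduced directly. The run and delete commands are taken from the log (minus --kubeconfig); the get is an assumed stand-in for the in-process "verifying the job was created" step, and the log's own warning already flags the job/v1 generator as deprecated on newer kubectl versions:

# Deprecated generator form used by this 1.13-era test
kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-j84cg
# Check that the Job object exists, then delete it as the AfterEach does
kubectl get jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-j84cg
kubectl delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-j84cg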
Aug 27 14:08:20.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:08:20.408: INFO: namespace: e2e-tests-kubectl-j84cg, resource: bindings, ignored listing per whitelist Aug 27 14:08:20.433: INFO: namespace e2e-tests-kubectl-j84cg deletion completed in 6.253072797s • [SLOW TEST:11.067 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:08:20.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 27 14:08:32.088: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 27 14:08:32.120: INFO: Pod pod-with-prestop-exec-hook still exists Aug 27 14:08:34.120: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 27 14:08:34.126: INFO: Pod pod-with-prestop-exec-hook still exists Aug 27 14:08:36.120: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 27 14:08:36.125: INFO: Pod pod-with-prestop-exec-hook still exists Aug 27 14:08:38.120: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 27 14:08:38.126: INFO: Pod pod-with-prestop-exec-hook still exists Aug 27 14:08:40.120: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 27 14:08:40.124: INFO: Pod pod-with-prestop-exec-hook still exists Aug 27 14:08:42.120: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 27 14:08:42.125: INFO: Pod pod-with-prestop-exec-hook still exists Aug 27 14:08:44.120: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 27 14:08:44.126: INFO: Pod pod-with-prestop-exec-hook still exists Aug 27 14:08:46.120: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 27 14:08:46.125: INFO: Pod pod-with-prestop-exec-hook still exists Aug 27 14:08:48.120: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 27 14:08:48.137: INFO: Pod pod-with-prestop-exec-hook still exists Aug 27 14:08:50.120: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 27 14:08:50.125: INFO: Pod pod-with-prestop-exec-hook still exists Aug 27 14:08:52.120: INFO: Waiting for pod 
pod-with-prestop-exec-hook to disappear Aug 27 14:08:52.837: INFO: Pod pod-with-prestop-exec-hook still exists Aug 27 14:08:54.121: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 27 14:08:54.322: INFO: Pod pod-with-prestop-exec-hook still exists Aug 27 14:08:56.120: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 27 14:08:56.675: INFO: Pod pod-with-prestop-exec-hook still exists Aug 27 14:08:58.122: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 27 14:08:58.243: INFO: Pod pod-with-prestop-exec-hook still exists Aug 27 14:09:00.120: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 27 14:09:00.239: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:09:00.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-7qrk2" for this suite. Aug 27 14:09:24.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:09:24.374: INFO: namespace: e2e-tests-container-lifecycle-hook-7qrk2, resource: bindings, ignored listing per whitelist Aug 27 14:09:24.397: INFO: namespace e2e-tests-container-lifecycle-hook-7qrk2 deletion completed in 24.133847345s • [SLOW TEST:63.963 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:09:24.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-e941c974-e86e-11ea-894e-0242ac11000d Aug 27 14:09:24.639: INFO: Pod name my-hostname-basic-e941c974-e86e-11ea-894e-0242ac11000d: Found 0 pods out of 1 Aug 27 14:09:29.645: INFO: Pod name my-hostname-basic-e941c974-e86e-11ea-894e-0242ac11000d: Found 1 pods out of 1 Aug 27 14:09:29.645: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-e941c974-e86e-11ea-894e-0242ac11000d" are running Aug 27 14:09:29.649: INFO: Pod "my-hostname-basic-e941c974-e86e-11ea-894e-0242ac11000d-7dfrs" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-27 14:09:24 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC 
LastTransitionTime:2020-08-27 14:09:28 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-27 14:09:28 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-27 14:09:24 +0000 UTC Reason: Message:}]) Aug 27 14:09:29.649: INFO: Trying to dial the pod Aug 27 14:09:34.844: INFO: Controller my-hostname-basic-e941c974-e86e-11ea-894e-0242ac11000d: Got expected result from replica 1 [my-hostname-basic-e941c974-e86e-11ea-894e-0242ac11000d-7dfrs]: "my-hostname-basic-e941c974-e86e-11ea-894e-0242ac11000d-7dfrs", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:09:34.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-w54z6" for this suite. Aug 27 14:09:44.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:09:45.141: INFO: namespace: e2e-tests-replication-controller-w54z6, resource: bindings, ignored listing per whitelist Aug 27 14:09:45.184: INFO: namespace e2e-tests-replication-controller-w54z6 deletion completed in 10.333311307s • [SLOW TEST:20.785 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:09:45.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-tht7z/secret-test-f5fe7e20-e86e-11ea-894e-0242ac11000d STEP: Creating a pod to test consume secrets Aug 27 14:09:46.200: INFO: Waiting up to 5m0s for pod "pod-configmaps-f6207dfc-e86e-11ea-894e-0242ac11000d" in namespace "e2e-tests-secrets-tht7z" to be "success or failure" Aug 27 14:09:46.382: INFO: Pod "pod-configmaps-f6207dfc-e86e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 181.63966ms Aug 27 14:09:48.387: INFO: Pod "pod-configmaps-f6207dfc-e86e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.187064098s Aug 27 14:09:50.882: INFO: Pod "pod-configmaps-f6207dfc-e86e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.682098078s Aug 27 14:09:52.946: INFO: Pod "pod-configmaps-f6207dfc-e86e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.745692725s Aug 27 14:09:54.951: INFO: Pod "pod-configmaps-f6207dfc-e86e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.750728886s Aug 27 14:09:56.957: INFO: Pod "pod-configmaps-f6207dfc-e86e-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.756612397s STEP: Saw pod success Aug 27 14:09:56.957: INFO: Pod "pod-configmaps-f6207dfc-e86e-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:09:56.961: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-f6207dfc-e86e-11ea-894e-0242ac11000d container env-test: STEP: delete the pod Aug 27 14:09:57.020: INFO: Waiting for pod pod-configmaps-f6207dfc-e86e-11ea-894e-0242ac11000d to disappear Aug 27 14:09:57.034: INFO: Pod pod-configmaps-f6207dfc-e86e-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:09:57.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-tht7z" for this suite. Aug 27 14:10:05.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:10:05.307: INFO: namespace: e2e-tests-secrets-tht7z, resource: bindings, ignored listing per whitelist Aug 27 14:10:05.389: INFO: namespace e2e-tests-secrets-tht7z deletion completed in 8.346394941s • [SLOW TEST:20.204 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:10:05.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-fqc56 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StaefulSet Aug 27 14:10:06.300: INFO: Found 0 stateful pods, waiting for 3 Aug 27 14:10:16.307: INFO: Found 2 stateful pods, waiting for 3 Aug 27 14:10:26.308: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 27 14:10:26.308: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 27 14:10:26.308: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Aug 27 14:10:26.349: INFO: Updating stateful set ss2 
STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Aug 27 14:10:36.840: INFO: Updating stateful set ss2 Aug 27 14:10:36.880: INFO: Waiting for Pod e2e-tests-statefulset-fqc56/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Aug 27 14:10:46.893: INFO: Waiting for Pod e2e-tests-statefulset-fqc56/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Aug 27 14:10:58.358: INFO: Found 2 stateful pods, waiting for 3 Aug 27 14:11:08.367: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 27 14:11:08.367: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 27 14:11:08.368: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Aug 27 14:11:08.400: INFO: Updating stateful set ss2 Aug 27 14:11:08.436: INFO: Waiting for Pod e2e-tests-statefulset-fqc56/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Aug 27 14:11:18.610: INFO: Updating stateful set ss2 Aug 27 14:11:18.661: INFO: Waiting for StatefulSet e2e-tests-statefulset-fqc56/ss2 to complete update Aug 27 14:11:18.662: INFO: Waiting for Pod e2e-tests-statefulset-fqc56/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Aug 27 14:11:28.676: INFO: Waiting for StatefulSet e2e-tests-statefulset-fqc56/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Aug 27 14:11:38.677: INFO: Deleting all statefulset in ns e2e-tests-statefulset-fqc56 Aug 27 14:11:38.681: INFO: Scaling statefulset ss2 to 0 Aug 27 14:12:08.707: INFO: Waiting for statefulset status.replicas updated to 0 Aug 27 14:12:08.710: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:12:08.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-fqc56" for this suite. 
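The canary and phased steps above are both driven by the StatefulSet's RollingUpdate partition: only pods whose ordinal is greater than or equal to the partition move to the update revision, and lowering the partition step by step rolls the change out from the highest ordinal down. A minimal client-go sketch of that knob, with an illustrative namespace and the Patch signature of recent client-go (the v1.13-era interface omits the context and options arguments); this is not the e2e framework's own helper:

// Lower the RollingUpdate partition on StatefulSet ss2 so that only pods
// with ordinal >= 2 (i.e. ss2-2) roll to the new pod template -- a canary.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Strategic-merge patch: partition=2 on a 3-replica set touches only ss2-2.
	patch := []byte(`{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}`)
	ss, err := client.AppsV1().StatefulSets("default").Patch(
		context.TODO(), "ss2", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("currentRevision=%s updateRevision=%s\n",
		ss.Status.CurrentRevision, ss.Status.UpdateRevision)
}

Dropping the partition back toward 0 then lets the controller finish the phased roll-out, one ordinal at a time from highest to lowest, which is what the "Waiting for StatefulSet ... to complete update" lines above are watching for.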
Aug 27 14:12:16.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:12:16.931: INFO: namespace: e2e-tests-statefulset-fqc56, resource: bindings, ignored listing per whitelist Aug 27 14:12:16.934: INFO: namespace e2e-tests-statefulset-fqc56 deletion completed in 8.139324261s • [SLOW TEST:131.545 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:12:16.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-6sfdr [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-6sfdr STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-6sfdr Aug 27 14:12:17.191: INFO: Found 0 stateful pods, waiting for 1 Aug 27 14:12:27.201: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Aug 27 14:12:27.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 27 14:12:28.708: INFO: stderr: "I0827 14:12:28.547521 428 log.go:172] (0x40006f4370) (0x400071e640) Create stream\nI0827 14:12:28.551560 428 log.go:172] (0x40006f4370) (0x400071e640) Stream added, broadcasting: 1\nI0827 14:12:28.568468 428 log.go:172] (0x40006f4370) Reply frame received for 1\nI0827 14:12:28.569720 428 log.go:172] (0x40006f4370) (0x400064edc0) Create stream\nI0827 14:12:28.569849 428 log.go:172] (0x40006f4370) (0x400064edc0) Stream added, broadcasting: 3\nI0827 14:12:28.571529 428 log.go:172] (0x40006f4370) Reply frame received for 3\nI0827 14:12:28.571726 428 log.go:172] (0x40006f4370) (0x400071e6e0) Create stream\nI0827 14:12:28.571784 428 log.go:172] (0x40006f4370) (0x400071e6e0) Stream added, broadcasting: 
5\nI0827 14:12:28.572979 428 log.go:172] (0x40006f4370) Reply frame received for 5\nI0827 14:12:28.690722 428 log.go:172] (0x40006f4370) Data frame received for 5\nI0827 14:12:28.690971 428 log.go:172] (0x400071e6e0) (5) Data frame handling\nI0827 14:12:28.691142 428 log.go:172] (0x40006f4370) Data frame received for 3\nI0827 14:12:28.691376 428 log.go:172] (0x400064edc0) (3) Data frame handling\nI0827 14:12:28.692153 428 log.go:172] (0x40006f4370) Data frame received for 1\nI0827 14:12:28.692361 428 log.go:172] (0x400071e640) (1) Data frame handling\nI0827 14:12:28.693505 428 log.go:172] (0x400071e640) (1) Data frame sent\nI0827 14:12:28.693615 428 log.go:172] (0x400064edc0) (3) Data frame sent\nI0827 14:12:28.693720 428 log.go:172] (0x40006f4370) Data frame received for 3\nI0827 14:12:28.693789 428 log.go:172] (0x400064edc0) (3) Data frame handling\nI0827 14:12:28.695346 428 log.go:172] (0x40006f4370) (0x400071e640) Stream removed, broadcasting: 1\nI0827 14:12:28.699059 428 log.go:172] (0x40006f4370) Go away received\nI0827 14:12:28.701776 428 log.go:172] (0x40006f4370) (0x400071e640) Stream removed, broadcasting: 1\nI0827 14:12:28.702275 428 log.go:172] (0x40006f4370) (0x400064edc0) Stream removed, broadcasting: 3\nI0827 14:12:28.702505 428 log.go:172] (0x40006f4370) (0x400071e6e0) Stream removed, broadcasting: 5\n" Aug 27 14:12:28.709: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 27 14:12:28.710: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 27 14:12:28.716: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 27 14:12:38.724: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 27 14:12:38.724: INFO: Waiting for statefulset status.replicas updated to 0 Aug 27 14:12:38.845: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999863988s Aug 27 14:12:39.852: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.891986361s Aug 27 14:12:40.962: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.885733968s Aug 27 14:12:42.358: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.775107567s Aug 27 14:12:43.395: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.379736366s Aug 27 14:12:44.401: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.342349638s Aug 27 14:12:45.407: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.33634943s Aug 27 14:12:46.414: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.330549142s Aug 27 14:12:47.422: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.322953223s Aug 27 14:12:48.429: INFO: Verifying statefulset ss doesn't scale past 1 for another 315.680729ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-6sfdr Aug 27 14:12:49.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:12:50.844: INFO: stderr: "I0827 14:12:50.750709 451 log.go:172] (0x4000166840) (0x400063f4a0) Create stream\nI0827 14:12:50.753974 451 log.go:172] (0x4000166840) (0x400063f4a0) Stream added, broadcasting: 1\nI0827 14:12:50.766292 451 log.go:172] (0x4000166840) Reply frame received for 1\nI0827 14:12:50.767506 451 log.go:172] 
(0x4000166840) (0x4000790000) Create stream\nI0827 14:12:50.767654 451 log.go:172] (0x4000166840) (0x4000790000) Stream added, broadcasting: 3\nI0827 14:12:50.770057 451 log.go:172] (0x4000166840) Reply frame received for 3\nI0827 14:12:50.770561 451 log.go:172] (0x4000166840) (0x4000790140) Create stream\nI0827 14:12:50.770694 451 log.go:172] (0x4000166840) (0x4000790140) Stream added, broadcasting: 5\nI0827 14:12:50.772390 451 log.go:172] (0x4000166840) Reply frame received for 5\nI0827 14:12:50.825608 451 log.go:172] (0x4000166840) Data frame received for 3\nI0827 14:12:50.825791 451 log.go:172] (0x4000790000) (3) Data frame handling\nI0827 14:12:50.825891 451 log.go:172] (0x4000166840) Data frame received for 5\nI0827 14:12:50.825978 451 log.go:172] (0x4000790140) (5) Data frame handling\nI0827 14:12:50.826556 451 log.go:172] (0x4000166840) Data frame received for 1\nI0827 14:12:50.826659 451 log.go:172] (0x400063f4a0) (1) Data frame handling\nI0827 14:12:50.826775 451 log.go:172] (0x4000790000) (3) Data frame sent\nI0827 14:12:50.827141 451 log.go:172] (0x4000166840) Data frame received for 3\nI0827 14:12:50.827208 451 log.go:172] (0x4000790000) (3) Data frame handling\nI0827 14:12:50.827390 451 log.go:172] (0x400063f4a0) (1) Data frame sent\nI0827 14:12:50.828983 451 log.go:172] (0x4000166840) (0x400063f4a0) Stream removed, broadcasting: 1\nI0827 14:12:50.830276 451 log.go:172] (0x4000166840) Go away received\nI0827 14:12:50.831993 451 log.go:172] (0x4000166840) (0x400063f4a0) Stream removed, broadcasting: 1\nI0827 14:12:50.832350 451 log.go:172] (0x4000166840) (0x4000790000) Stream removed, broadcasting: 3\nI0827 14:12:50.832952 451 log.go:172] (0x4000166840) (0x4000790140) Stream removed, broadcasting: 5\n" Aug 27 14:12:50.845: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 27 14:12:50.845: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 27 14:12:50.851: INFO: Found 1 stateful pods, waiting for 3 Aug 27 14:13:00.861: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 27 14:13:00.861: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 27 14:13:00.861: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Aug 27 14:13:00.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 27 14:13:02.296: INFO: stderr: "I0827 14:13:02.197589 475 log.go:172] (0x4000720370) (0x400060f4a0) Create stream\nI0827 14:13:02.201078 475 log.go:172] (0x4000720370) (0x400060f4a0) Stream added, broadcasting: 1\nI0827 14:13:02.214152 475 log.go:172] (0x4000720370) Reply frame received for 1\nI0827 14:13:02.215056 475 log.go:172] (0x4000720370) (0x4000708000) Create stream\nI0827 14:13:02.215167 475 log.go:172] (0x4000720370) (0x4000708000) Stream added, broadcasting: 3\nI0827 14:13:02.217244 475 log.go:172] (0x4000720370) Reply frame received for 3\nI0827 14:13:02.217599 475 log.go:172] (0x4000720370) (0x40007080a0) Create stream\nI0827 14:13:02.217711 475 log.go:172] (0x4000720370) (0x40007080a0) Stream added, broadcasting: 5\nI0827 14:13:02.218915 475 log.go:172] (0x4000720370) Reply frame received for 5\nI0827 
14:13:02.275025 475 log.go:172] (0x4000720370) Data frame received for 1\nI0827 14:13:02.275305 475 log.go:172] (0x4000720370) Data frame received for 3\nI0827 14:13:02.275694 475 log.go:172] (0x4000720370) Data frame received for 5\nI0827 14:13:02.276513 475 log.go:172] (0x40007080a0) (5) Data frame handling\nI0827 14:13:02.276678 475 log.go:172] (0x400060f4a0) (1) Data frame handling\nI0827 14:13:02.276820 475 log.go:172] (0x4000708000) (3) Data frame handling\nI0827 14:13:02.279298 475 log.go:172] (0x4000708000) (3) Data frame sent\nI0827 14:13:02.279934 475 log.go:172] (0x4000720370) Data frame received for 3\nI0827 14:13:02.280033 475 log.go:172] (0x4000708000) (3) Data frame handling\nI0827 14:13:02.280234 475 log.go:172] (0x400060f4a0) (1) Data frame sent\nI0827 14:13:02.280704 475 log.go:172] (0x4000720370) (0x400060f4a0) Stream removed, broadcasting: 1\nI0827 14:13:02.282318 475 log.go:172] (0x4000720370) Go away received\nI0827 14:13:02.284352 475 log.go:172] (0x4000720370) (0x400060f4a0) Stream removed, broadcasting: 1\nI0827 14:13:02.284630 475 log.go:172] (0x4000720370) (0x4000708000) Stream removed, broadcasting: 3\nI0827 14:13:02.284867 475 log.go:172] (0x4000720370) (0x40007080a0) Stream removed, broadcasting: 5\n" Aug 27 14:13:02.297: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 27 14:13:02.297: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 27 14:13:02.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 27 14:13:03.757: INFO: stderr: "I0827 14:13:03.593896 498 log.go:172] (0x400015c840) (0x40007e6640) Create stream\nI0827 14:13:03.599219 498 log.go:172] (0x400015c840) (0x40007e6640) Stream added, broadcasting: 1\nI0827 14:13:03.615716 498 log.go:172] (0x400015c840) Reply frame received for 1\nI0827 14:13:03.616465 498 log.go:172] (0x400015c840) (0x40005aadc0) Create stream\nI0827 14:13:03.616561 498 log.go:172] (0x400015c840) (0x40005aadc0) Stream added, broadcasting: 3\nI0827 14:13:03.618265 498 log.go:172] (0x400015c840) Reply frame received for 3\nI0827 14:13:03.618451 498 log.go:172] (0x400015c840) (0x40007e66e0) Create stream\nI0827 14:13:03.618505 498 log.go:172] (0x400015c840) (0x40007e66e0) Stream added, broadcasting: 5\nI0827 14:13:03.619503 498 log.go:172] (0x400015c840) Reply frame received for 5\nI0827 14:13:03.729795 498 log.go:172] (0x400015c840) Data frame received for 3\nI0827 14:13:03.730080 498 log.go:172] (0x40005aadc0) (3) Data frame handling\nI0827 14:13:03.730358 498 log.go:172] (0x400015c840) Data frame received for 5\nI0827 14:13:03.730559 498 log.go:172] (0x40007e66e0) (5) Data frame handling\nI0827 14:13:03.730750 498 log.go:172] (0x40005aadc0) (3) Data frame sent\nI0827 14:13:03.733340 498 log.go:172] (0x400015c840) Data frame received for 3\nI0827 14:13:03.733441 498 log.go:172] (0x40005aadc0) (3) Data frame handling\nI0827 14:13:03.739234 498 log.go:172] (0x400015c840) Data frame received for 1\nI0827 14:13:03.739328 498 log.go:172] (0x40007e6640) (1) Data frame handling\nI0827 14:13:03.739430 498 log.go:172] (0x40007e6640) (1) Data frame sent\nI0827 14:13:03.741611 498 log.go:172] (0x400015c840) (0x40007e6640) Stream removed, broadcasting: 1\nI0827 14:13:03.741939 498 log.go:172] (0x400015c840) Go away received\nI0827 14:13:03.744502 498 log.go:172] (0x400015c840) (0x40007e6640) 
Stream removed, broadcasting: 1\nI0827 14:13:03.744720 498 log.go:172] (0x400015c840) (0x40005aadc0) Stream removed, broadcasting: 3\nI0827 14:13:03.744954 498 log.go:172] (0x400015c840) (0x40007e66e0) Stream removed, broadcasting: 5\n" Aug 27 14:13:03.757: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 27 14:13:03.757: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 27 14:13:03.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 27 14:13:05.178: INFO: stderr: "I0827 14:13:05.015553 522 log.go:172] (0x4000152840) (0x400065b400) Create stream\nI0827 14:13:05.018324 522 log.go:172] (0x4000152840) (0x400065b400) Stream added, broadcasting: 1\nI0827 14:13:05.030745 522 log.go:172] (0x4000152840) Reply frame received for 1\nI0827 14:13:05.032036 522 log.go:172] (0x4000152840) (0x400065b4a0) Create stream\nI0827 14:13:05.032181 522 log.go:172] (0x4000152840) (0x400065b4a0) Stream added, broadcasting: 3\nI0827 14:13:05.034604 522 log.go:172] (0x4000152840) Reply frame received for 3\nI0827 14:13:05.035037 522 log.go:172] (0x4000152840) (0x4000736000) Create stream\nI0827 14:13:05.035149 522 log.go:172] (0x4000152840) (0x4000736000) Stream added, broadcasting: 5\nI0827 14:13:05.036951 522 log.go:172] (0x4000152840) Reply frame received for 5\nI0827 14:13:05.154075 522 log.go:172] (0x4000152840) Data frame received for 5\nI0827 14:13:05.154358 522 log.go:172] (0x4000152840) Data frame received for 3\nI0827 14:13:05.154541 522 log.go:172] (0x400065b4a0) (3) Data frame handling\nI0827 14:13:05.154955 522 log.go:172] (0x4000736000) (5) Data frame handling\nI0827 14:13:05.155363 522 log.go:172] (0x4000152840) Data frame received for 1\nI0827 14:13:05.155451 522 log.go:172] (0x400065b400) (1) Data frame handling\nI0827 14:13:05.155909 522 log.go:172] (0x400065b4a0) (3) Data frame sent\nI0827 14:13:05.156069 522 log.go:172] (0x400065b400) (1) Data frame sent\nI0827 14:13:05.158410 522 log.go:172] (0x4000152840) Data frame received for 3\nI0827 14:13:05.158505 522 log.go:172] (0x400065b4a0) (3) Data frame handling\nI0827 14:13:05.160059 522 log.go:172] (0x4000152840) (0x400065b400) Stream removed, broadcasting: 1\nI0827 14:13:05.162573 522 log.go:172] (0x4000152840) Go away received\nI0827 14:13:05.164827 522 log.go:172] (0x4000152840) (0x400065b400) Stream removed, broadcasting: 1\nI0827 14:13:05.165076 522 log.go:172] (0x4000152840) (0x400065b4a0) Stream removed, broadcasting: 3\nI0827 14:13:05.165288 522 log.go:172] (0x4000152840) (0x4000736000) Stream removed, broadcasting: 5\n" Aug 27 14:13:05.178: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 27 14:13:05.178: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 27 14:13:05.179: INFO: Waiting for statefulset status.replicas updated to 0 Aug 27 14:13:05.185: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Aug 27 14:13:15.413: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 27 14:13:15.413: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 27 14:13:15.413: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 27 
14:13:15.679: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999983472s Aug 27 14:13:16.709: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.743825452s Aug 27 14:13:17.716: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.713880208s Aug 27 14:13:18.899: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.706832311s Aug 27 14:13:19.907: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.524617704s Aug 27 14:13:20.917: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.516355803s Aug 27 14:13:21.938: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.506065877s Aug 27 14:13:22.947: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.485628496s Aug 27 14:13:23.957: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.475853693s Aug 27 14:13:24.968: INFO: Verifying statefulset ss doesn't scale past 3 for another 465.75587ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-6sfdr Aug 27 14:13:25.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:13:27.406: INFO: stderr: "I0827 14:13:27.269196 545 log.go:172] (0x400015c840) (0x4000665360) Create stream\nI0827 14:13:27.275117 545 log.go:172] (0x400015c840) (0x4000665360) Stream added, broadcasting: 1\nI0827 14:13:27.292127 545 log.go:172] (0x400015c840) Reply frame received for 1\nI0827 14:13:27.293819 545 log.go:172] (0x400015c840) (0x4000665400) Create stream\nI0827 14:13:27.294040 545 log.go:172] (0x400015c840) (0x4000665400) Stream added, broadcasting: 3\nI0827 14:13:27.296463 545 log.go:172] (0x400015c840) Reply frame received for 3\nI0827 14:13:27.296925 545 log.go:172] (0x400015c840) (0x40006b0000) Create stream\nI0827 14:13:27.297048 545 log.go:172] (0x400015c840) (0x40006b0000) Stream added, broadcasting: 5\nI0827 14:13:27.298588 545 log.go:172] (0x400015c840) Reply frame received for 5\nI0827 14:13:27.389216 545 log.go:172] (0x400015c840) Data frame received for 5\nI0827 14:13:27.389697 545 log.go:172] (0x400015c840) Data frame received for 3\nI0827 14:13:27.389909 545 log.go:172] (0x4000665400) (3) Data frame handling\nI0827 14:13:27.390062 545 log.go:172] (0x40006b0000) (5) Data frame handling\nI0827 14:13:27.391452 545 log.go:172] (0x400015c840) Data frame received for 1\nI0827 14:13:27.391596 545 log.go:172] (0x4000665360) (1) Data frame handling\nI0827 14:13:27.392042 545 log.go:172] (0x4000665400) (3) Data frame sent\nI0827 14:13:27.392670 545 log.go:172] (0x400015c840) Data frame received for 3\nI0827 14:13:27.392888 545 log.go:172] (0x4000665400) (3) Data frame handling\nI0827 14:13:27.392998 545 log.go:172] (0x4000665360) (1) Data frame sent\nI0827 14:13:27.394831 545 log.go:172] (0x400015c840) (0x4000665360) Stream removed, broadcasting: 1\nI0827 14:13:27.398330 545 log.go:172] (0x400015c840) Go away received\nI0827 14:13:27.399067 545 log.go:172] (0x400015c840) (0x4000665360) Stream removed, broadcasting: 1\nI0827 14:13:27.400170 545 log.go:172] (0x400015c840) (0x4000665400) Stream removed, broadcasting: 3\nI0827 14:13:27.400453 545 log.go:172] (0x400015c840) (0x40006b0000) Stream removed, broadcasting: 5\n" Aug 27 14:13:27.408: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 27 14:13:27.408: INFO: stdout of mv -v /tmp/index.html 
/usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 27 14:13:27.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:13:28.836: INFO: stderr: "I0827 14:13:28.718861 569 log.go:172] (0x400015c840) (0x40005bf360) Create stream\nI0827 14:13:28.724804 569 log.go:172] (0x400015c840) (0x40005bf360) Stream added, broadcasting: 1\nI0827 14:13:28.740455 569 log.go:172] (0x400015c840) Reply frame received for 1\nI0827 14:13:28.741077 569 log.go:172] (0x400015c840) (0x40007a2000) Create stream\nI0827 14:13:28.741144 569 log.go:172] (0x400015c840) (0x40007a2000) Stream added, broadcasting: 3\nI0827 14:13:28.742892 569 log.go:172] (0x400015c840) Reply frame received for 3\nI0827 14:13:28.743453 569 log.go:172] (0x400015c840) (0x40007a2140) Create stream\nI0827 14:13:28.743628 569 log.go:172] (0x400015c840) (0x40007a2140) Stream added, broadcasting: 5\nI0827 14:13:28.745394 569 log.go:172] (0x400015c840) Reply frame received for 5\nI0827 14:13:28.817448 569 log.go:172] (0x400015c840) Data frame received for 3\nI0827 14:13:28.817681 569 log.go:172] (0x400015c840) Data frame received for 1\nI0827 14:13:28.817962 569 log.go:172] (0x400015c840) Data frame received for 5\nI0827 14:13:28.818152 569 log.go:172] (0x40005bf360) (1) Data frame handling\nI0827 14:13:28.818284 569 log.go:172] (0x40007a2140) (5) Data frame handling\nI0827 14:13:28.819478 569 log.go:172] (0x40007a2000) (3) Data frame handling\nI0827 14:13:28.820161 569 log.go:172] (0x40007a2000) (3) Data frame sent\nI0827 14:13:28.820594 569 log.go:172] (0x40005bf360) (1) Data frame sent\nI0827 14:13:28.820908 569 log.go:172] (0x400015c840) Data frame received for 3\nI0827 14:13:28.821046 569 log.go:172] (0x40007a2000) (3) Data frame handling\nI0827 14:13:28.823397 569 log.go:172] (0x400015c840) (0x40005bf360) Stream removed, broadcasting: 1\nI0827 14:13:28.824674 569 log.go:172] (0x400015c840) Go away received\nI0827 14:13:28.826533 569 log.go:172] (0x400015c840) (0x40005bf360) Stream removed, broadcasting: 1\nI0827 14:13:28.826841 569 log.go:172] (0x400015c840) (0x40007a2000) Stream removed, broadcasting: 3\nI0827 14:13:28.827085 569 log.go:172] (0x400015c840) (0x40007a2140) Stream removed, broadcasting: 5\n" Aug 27 14:13:28.837: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 27 14:13:28.837: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 27 14:13:28.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:13:30.123: INFO: rc: 1 Aug 27 14:13:30.125: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0x4000c7d380 exit status 1 true [0x40002e7ac8 0x40002e7ae0 0x40002e7b00] [0x40002e7ac8 0x40002e7ae0 0x40002e7b00] [0x40002e7ad8 0x40002e7af8] [0x856db8 0x856db8] 0x400206be00 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Aug 27 14:13:40.126: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:13:41.431: INFO: rc: 1 Aug 27 14:13:41.432: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x400066bad0 exit status 1 true [0x400043ab70 0x400043aba8 0x400043abc0] [0x400043ab70 0x400043aba8 0x400043abc0] [0x400043ab98 0x400043abb8] [0x856db8 0x856db8] 0x40014517a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 27 14:13:51.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:13:52.602: INFO: rc: 1 Aug 27 14:13:52.603: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001f341b0 exit status 1 true [0x4000182000 0x40002e6cd8 0x40002e6d00] [0x4000182000 0x40002e6cd8 0x40002e6d00] [0x40002e6ca0 0x40002e6cf0] [0x856db8 0x856db8] 0x4001b4c300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 27 14:14:02.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:14:03.777: INFO: rc: 1 Aug 27 14:14:03.778: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001b7a180 exit status 1 true [0x4001930008 0x4001930038 0x4001930050] [0x4001930008 0x4001930038 0x4001930050] [0x4001930030 0x4001930048] [0x856db8 0x856db8] 0x4001e8e660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 27 14:14:13.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:14:14.974: INFO: rc: 1 Aug 27 14:14:14.974: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001e96180 exit status 1 true [0x40011e2000 0x40011e2018 0x40011e2038] [0x40011e2000 0x40011e2018 0x40011e2038] [0x40011e2010 0x40011e2030] [0x856db8 0x856db8] 0x4001a84720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 27 14:14:24.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:14:26.155: 
INFO: rc: 1 Aug 27 14:14:26.156: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001e962a0 exit status 1 true [0x40011e2040 0x40011e2058 0x40011e2070] [0x40011e2040 0x40011e2058 0x40011e2070] [0x40011e2050 0x40011e2068] [0x856db8 0x856db8] 0x4001a84a20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 27 14:14:36.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:14:37.324: INFO: rc: 1 Aug 27 14:14:37.324: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001722120 exit status 1 true [0x400043a090 0x400043a198 0x400043a208] [0x400043a090 0x400043a198 0x400043a208] [0x400043a170 0x400043a1d0] [0x856db8 0x856db8] 0x4001ba4a80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 27 14:14:47.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:14:48.521: INFO: rc: 1 Aug 27 14:14:48.521: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001e963f0 exit status 1 true [0x40011e2078 0x40011e2090 0x40011e20a8] [0x40011e2078 0x40011e2090 0x40011e20a8] [0x40011e2088 0x40011e20a0] [0x856db8 0x856db8] 0x4001a84cc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 27 14:14:58.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:14:59.724: INFO: rc: 1 Aug 27 14:14:59.724: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001b7a2d0 exit status 1 true [0x4001930060 0x40019300a0 0x40019300b8] [0x4001930060 0x40019300a0 0x40019300b8] [0x4001930088 0x40019300b0] [0x856db8 0x856db8] 0x4001e8e8a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 27 14:15:09.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:15:10.902: INFO: rc: 1 Aug 27 14:15:10.902: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x40017222d0 exit status 1 true [0x400043a238 0x400043a328 0x400043a370] [0x400043a238 0x400043a328 0x400043a370] [0x400043a2a8 0x400043a360] [0x856db8 0x856db8] 0x4001ba4d20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 27 14:15:20.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:15:22.123: INFO: rc: 1 Aug 27 14:15:22.124: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001f34390 exit status 1 true [0x40002e6d10 0x40002e6da0 0x40002e6dc0] [0x40002e6d10 0x40002e6da0 0x40002e6dc0] [0x40002e6d90 0x40002e6db8] [0x856db8 0x856db8] 0x4001b4cba0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 27 14:15:32.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:15:33.335: INFO: rc: 1 Aug 27 14:15:33.335: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001b7a420 exit status 1 true [0x40019300c0 0x40019300d8 0x40019300f0] [0x40019300c0 0x40019300d8 0x40019300f0] [0x40019300d0 0x40019300e8] [0x856db8 0x856db8] 0x4001e8eb40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 27 14:15:43.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:15:44.578: INFO: rc: 1 Aug 27 14:15:44.578: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001e96510 exit status 1 true [0x40011e20b0 0x40011e20c8 0x40011e20e0] [0x40011e20b0 0x40011e20c8 0x40011e20e0] [0x40011e20c0 0x40011e20d8] [0x856db8 0x856db8] 0x4001a84f60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 27 14:15:54.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:15:55.769: INFO: rc: 1 Aug 27 14:15:55.769: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001e961b0 exit 
status 1 true [0x40002e6c98 0x40002e6ce8 0x40002e6d10] [0x40002e6c98 0x40002e6ce8 0x40002e6d10] [0x40002e6cd8 0x40002e6d00] [0x856db8 0x856db8] 0x4001a845a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 27 14:16:05.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:16:06.934: INFO: rc: 1 Aug 27 14:16:06.934: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001f341e0 exit status 1 true [0x40011e2000 0x40011e2018 0x40011e2038] [0x40011e2000 0x40011e2018 0x40011e2038] [0x40011e2010 0x40011e2030] [0x856db8 0x856db8] 0x4001e8e660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 27 14:16:16.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:16:19.192: INFO: rc: 1 Aug 27 14:16:19.192: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001e96330 exit status 1 true [0x40002e6d88 0x40002e6da8 0x40002e6de8] [0x40002e6d88 0x40002e6da8 0x40002e6de8] [0x40002e6da0 0x40002e6dc0] [0x856db8 0x856db8] 0x4001a849c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 27 14:16:29.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:16:30.345: INFO: rc: 1 Aug 27 14:16:30.346: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001e96480 exit status 1 true [0x40002e6df0 0x40002e6e20 0x40002e6e80] [0x40002e6df0 0x40002e6e20 0x40002e6e80] [0x40002e6e10 0x40002e6e68] [0x856db8 0x856db8] 0x4001a84c60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 27 14:16:40.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:16:41.522: INFO: rc: 1 Aug 27 14:16:41.522: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001f34360 exit status 1 true [0x40011e2040 0x40011e2058 0x40011e2070] [0x40011e2040 0x40011e2058 0x40011e2070] [0x40011e2050 0x40011e2068] [0x856db8 0x856db8] 0x4001e8e900 }: Command stdout: stderr: Error from 
server (NotFound): pods "ss-2" not found error: exit status 1 Aug 27 14:16:51.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:16:53.703: INFO: rc: 1 Aug 27 14:16:53.703: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001722180 exit status 1 true [0x4001930000 0x4001930030 0x4001930048] [0x4001930000 0x4001930030 0x4001930048] [0x4001930028 0x4001930040] [0x856db8 0x856db8] 0x4001b4c300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 27 14:17:03.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:17:04.802: INFO: rc: 1 Aug 27 14:17:04.802: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001f344b0 exit status 1 true [0x40011e2078 0x40011e2090 0x40011e20a8] [0x40011e2078 0x40011e2090 0x40011e20a8] [0x40011e2088 0x40011e20a0] [0x856db8 0x856db8] 0x4001e8eba0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 27 14:17:14.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:17:15.986: INFO: rc: 1 Aug 27 14:17:15.986: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001f34600 exit status 1 true [0x40011e20b0 0x40011e20c8 0x40011e20e0] [0x40011e20b0 0x40011e20c8 0x40011e20e0] [0x40011e20c0 0x40011e20d8] [0x856db8 0x856db8] 0x4001e8ee40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 27 14:17:25.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:17:27.339: INFO: rc: 1 Aug 27 14:17:27.339: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001722360 exit status 1 true [0x4001930050 0x4001930088 0x40019300b0] [0x4001930050 0x4001930088 0x40019300b0] [0x4001930080 0x40019300a8] [0x856db8 0x856db8] 0x4001b4cba0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 27 14:17:37.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:17:38.842: INFO: rc: 1 Aug 27 14:17:38.843: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001f34750 exit status 1 true [0x40011e20e8 0x40011e2100 0x40011e2118] [0x40011e20e8 0x40011e2100 0x40011e2118] [0x40011e20f8 0x40011e2110] [0x856db8 0x856db8] 0x4001e8f0e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 27 14:17:48.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:17:50.272: INFO: rc: 1 Aug 27 14:17:50.272: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001e96600 exit status 1 true [0x40002e6e88 0x40002e6ea8 0x40002e6ec8] [0x40002e6e88 0x40002e6ea8 0x40002e6ec8] [0x40002e6ea0 0x40002e6ec0] [0x856db8 0x856db8] 0x4001a84f00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 27 14:18:00.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:18:01.975: INFO: rc: 1 Aug 27 14:18:01.976: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001722120 exit status 1 true [0x40002e6c98 0x40002e6ce8 0x40002e6d10] [0x40002e6c98 0x40002e6ce8 0x40002e6d10] [0x40002e6cd8 0x40002e6d00] [0x856db8 0x856db8] 0x4001a845a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 27 14:18:11.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:18:14.089: INFO: rc: 1 Aug 27 14:18:14.089: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4001f341b0 exit status 1 true [0x4001930000 0x4001930030 0x4001930048] [0x4001930000 0x4001930030 0x4001930048] [0x4001930028 0x4001930040] [0x856db8 0x856db8] 0x4001b4c300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 27 14:18:24.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:18:25.269: INFO: rc: 1 Aug 27 14:18:25.270: INFO: Waiting 10s to retry 
failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x40017222a0 exit status 1 true [0x40002e6d88 0x40002e6da8 0x40002e6de8] [0x40002e6d88 0x40002e6da8 0x40002e6de8] [0x40002e6da0 0x40002e6dc0] [0x856db8 0x856db8] 0x4001a849c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 27 14:18:35.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6sfdr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:18:36.468: INFO: rc: 1 Aug 27 14:18:36.468: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: Aug 27 14:18:36.468: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Aug 27 14:18:36.486: INFO: Deleting all statefulset in ns e2e-tests-statefulset-6sfdr Aug 27 14:18:36.489: INFO: Scaling statefulset ss to 0 Aug 27 14:18:36.499: INFO: Waiting for statefulset status.replicas updated to 0 Aug 27 14:18:36.502: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:18:37.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-6sfdr" for this suite. Aug 27 14:18:59.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:19:00.060: INFO: namespace: e2e-tests-statefulset-6sfdr, resource: bindings, ignored listing per whitelist Aug 27 14:19:00.285: INFO: namespace e2e-tests-statefulset-6sfdr deletion completed in 22.961119593s • [SLOW TEST:403.350 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:19:00.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name 
projected-secret-test-41e469e8-e870-11ea-894e-0242ac11000d STEP: Creating a pod to test consume secrets Aug 27 14:19:03.903: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-42151082-e870-11ea-894e-0242ac11000d" in namespace "e2e-tests-projected-l589z" to be "success or failure" Aug 27 14:19:04.337: INFO: Pod "pod-projected-secrets-42151082-e870-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 433.328172ms Aug 27 14:19:06.747: INFO: Pod "pod-projected-secrets-42151082-e870-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.843845441s Aug 27 14:19:08.755: INFO: Pod "pod-projected-secrets-42151082-e870-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.851281173s Aug 27 14:19:10.763: INFO: Pod "pod-projected-secrets-42151082-e870-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.859127466s Aug 27 14:19:13.058: INFO: Pod "pod-projected-secrets-42151082-e870-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.154714358s Aug 27 14:19:15.268: INFO: Pod "pod-projected-secrets-42151082-e870-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.3647755s STEP: Saw pod success Aug 27 14:19:15.269: INFO: Pod "pod-projected-secrets-42151082-e870-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:19:15.274: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-42151082-e870-11ea-894e-0242ac11000d container projected-secret-volume-test: STEP: delete the pod Aug 27 14:19:15.669: INFO: Waiting for pod pod-projected-secrets-42151082-e870-11ea-894e-0242ac11000d to disappear Aug 27 14:19:15.723: INFO: Pod pod-projected-secrets-42151082-e870-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:19:15.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-l589z" for this suite. 
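The pod exercised just above combines three things: a projected volume carrying the Secret, a defaultMode on that projection, and a pod-level securityContext supplying the non-root UID plus an fsGroup so the files stay readable. A rough hand-written equivalent using illustrative names, UID/GID, and mode values rather than the test's generated ones:

// A pod that mounts a Secret through a projected volume with defaultMode set,
// running as a non-root user with an fsGroup (illustrative values throughout).
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0440)  // defaultMode applied to the projected files
	user := int64(1000)  // non-root UID the container runs as
	group := int64(2000) // fsGroup applied to the volume's files

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &user,
				FSGroup:   &group,
			},
			Volumes: []corev1.Volume{{
				Name: "projected-secret",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-secret-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -ln /etc/projected && cat /etc/projected/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-secret", MountPath: "/etc/projected"}},
			}},
		},
	}
	fmt.Printf("pod %q: projected secret mounted with mode %#o\n", pod.Name, mode)
}

Because fsGroup is set, the kubelet makes the projected files group-owned by that GID, which is what lets the non-root user read them even under a restrictive defaultMode.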
Aug 27 14:19:26.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:19:26.284: INFO: namespace: e2e-tests-projected-l589z, resource: bindings, ignored listing per whitelist Aug 27 14:19:26.481: INFO: namespace e2e-tests-projected-l589z deletion completed in 10.620151987s • [SLOW TEST:26.195 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:19:26.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Aug 27 14:19:27.272: INFO: Waiting up to 5m0s for pod "pod-507e416f-e870-11ea-894e-0242ac11000d" in namespace "e2e-tests-emptydir-k8zkc" to be "success or failure" Aug 27 14:19:27.284: INFO: Pod "pod-507e416f-e870-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.764517ms Aug 27 14:19:29.291: INFO: Pod "pod-507e416f-e870-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018447513s Aug 27 14:19:31.298: INFO: Pod "pod-507e416f-e870-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 4.025741675s Aug 27 14:19:33.305: INFO: Pod "pod-507e416f-e870-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.033344343s STEP: Saw pod success Aug 27 14:19:33.306: INFO: Pod "pod-507e416f-e870-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:19:33.310: INFO: Trying to get logs from node hunter-worker pod pod-507e416f-e870-11ea-894e-0242ac11000d container test-container: STEP: delete the pod Aug 27 14:19:33.339: INFO: Waiting for pod pod-507e416f-e870-11ea-894e-0242ac11000d to disappear Aug 27 14:19:33.410: INFO: Pod pod-507e416f-e870-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:19:33.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-k8zkc" for this suite. 
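The emptyDir permission tests in this stretch ((non-root,0666,default) here, (non-root,0644,default) next) follow one pattern: a non-root container writes a file into an emptyDir volume with a given mode and the mode is read back. A rough manual equivalent, with made-up names and busybox standing in for the test image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  securityContext:
    runAsUser: 1001            # non-root
  containers:
  - name: test
    image: busybox
    # Create a file with the requested mode and print it back; the 0644 variant
    # differs only in the chmod value.
    command: ["sh", "-c", "touch /mnt/test && chmod 0666 /mnt/test && stat -c %a /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  restartPolicy: Never
  volumes:
  - name: scratch
    emptyDir: {}               # default medium (node disk)
EOF

kubectl logs emptydir-mode-demo   # expect: 666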
Aug 27 14:19:41.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:19:41.573: INFO: namespace: e2e-tests-emptydir-k8zkc, resource: bindings, ignored listing per whitelist Aug 27 14:19:41.635: INFO: namespace e2e-tests-emptydir-k8zkc deletion completed in 8.215831232s • [SLOW TEST:15.154 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:19:41.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 27 14:19:42.037: INFO: Waiting up to 5m0s for pod "pod-594b427d-e870-11ea-894e-0242ac11000d" in namespace "e2e-tests-emptydir-69vl6" to be "success or failure" Aug 27 14:19:42.082: INFO: Pod "pod-594b427d-e870-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 44.723889ms Aug 27 14:19:44.251: INFO: Pod "pod-594b427d-e870-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21361567s Aug 27 14:19:46.778: INFO: Pod "pod-594b427d-e870-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.741365596s STEP: Saw pod success Aug 27 14:19:46.779: INFO: Pod "pod-594b427d-e870-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:19:46.838: INFO: Trying to get logs from node hunter-worker2 pod pod-594b427d-e870-11ea-894e-0242ac11000d container test-container: STEP: delete the pod Aug 27 14:19:46.958: INFO: Waiting for pod pod-594b427d-e870-11ea-894e-0242ac11000d to disappear Aug 27 14:19:47.029: INFO: Pod pod-594b427d-e870-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:19:47.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-69vl6" for this suite. 
Aug 27 14:19:55.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:19:55.591: INFO: namespace: e2e-tests-emptydir-69vl6, resource: bindings, ignored listing per whitelist Aug 27 14:19:55.909: INFO: namespace e2e-tests-emptydir-69vl6 deletion completed in 8.867406792s • [SLOW TEST:14.273 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:19:55.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-61d21c31-e870-11ea-894e-0242ac11000d STEP: Creating configMap with name cm-test-opt-upd-61d2249b-e870-11ea-894e-0242ac11000d STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-61d21c31-e870-11ea-894e-0242ac11000d STEP: Updating configmap cm-test-opt-upd-61d2249b-e870-11ea-894e-0242ac11000d STEP: Creating configMap with name cm-test-opt-create-61d22b6c-e870-11ea-894e-0242ac11000d STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:21:46.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-64f95" for this suite. 
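What the optional-ConfigMap test exercises is that a pod with optional configMap volumes starts and keeps running while the referenced ConfigMaps are created, updated, or deleted underneath it, and that the mounted files eventually reflect the changes. A minimal sketch of the same idea (all names invented; the kubelet sync can take a minute or two, much as the test's ~110s runtime suggests):

kubectl create configmap cm-upd --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-optional-demo
spec:
  containers:
  - name: watcher
    image: busybox
    # Poll both mounts; missing files are simply skipped.
    command: ["sh", "-c", "while true; do cat /etc/cm-upd/data-1 2>/dev/null; cat /etc/cm-new/data-1 2>/dev/null; echo; sleep 5; done"]
    volumeMounts:
    - name: upd
      mountPath: /etc/cm-upd
    - name: new
      mountPath: /etc/cm-new
  volumes:
  - name: upd
    configMap:
      name: cm-upd
      optional: true           # pod starts even if this ConfigMap goes away
  - name: new
    configMap:
      name: cm-new             # does not exist yet
      optional: true
EOF

# Mutate the ConfigMaps and watch the mounted files follow:
kubectl patch configmap cm-upd -p '{"data":{"data-1":"value-updated"}}'
kubectl create configmap cm-new --from-literal=data-1=value-created
kubectl logs -f cm-optional-demo

Deleting one of the optional ConfigMaps (the third case the test covers) likewise leaves the pod running.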
Aug 27 14:22:08.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:22:08.817: INFO: namespace: e2e-tests-configmap-64f95, resource: bindings, ignored listing per whitelist Aug 27 14:22:08.890: INFO: namespace e2e-tests-configmap-64f95 deletion completed in 22.148000956s • [SLOW TEST:132.980 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:22:08.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 27 14:22:09.790: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b11343df-e870-11ea-894e-0242ac11000d" in namespace "e2e-tests-projected-vczxx" to be "success or failure" Aug 27 14:22:09.999: INFO: Pod "downwardapi-volume-b11343df-e870-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 208.959031ms Aug 27 14:22:12.006: INFO: Pod "downwardapi-volume-b11343df-e870-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215183051s Aug 27 14:22:14.011: INFO: Pod "downwardapi-volume-b11343df-e870-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 4.220384404s Aug 27 14:22:16.017: INFO: Pod "downwardapi-volume-b11343df-e870-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.227038004s STEP: Saw pod success Aug 27 14:22:16.018: INFO: Pod "downwardapi-volume-b11343df-e870-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:22:16.022: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-b11343df-e870-11ea-894e-0242ac11000d container client-container: STEP: delete the pod Aug 27 14:22:16.063: INFO: Waiting for pod downwardapi-volume-b11343df-e870-11ea-894e-0242ac11000d to disappear Aug 27 14:22:16.094: INFO: Pod downwardapi-volume-b11343df-e870-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:22:16.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vczxx" for this suite. 
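The projected downwardAPI DefaultMode test checks that pod metadata exposed through a projected volume is written with the requested file mode. An illustrative manifest (names and the 0400 mode are arbitrary choices here, not the test's own spec):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  containers:
  - name: main
    image: busybox
    # Print the mode of the resolved file, then its contents (the pod name).
    command: ["sh", "-c", "stat -c %a $(readlink -f /etc/podinfo/podname) && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  restartPolicy: Never
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400        # expect 400 on the projected file
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF

kubectl logs downwardapi-mode-demo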
Aug 27 14:22:26.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:22:26.240: INFO: namespace: e2e-tests-projected-vczxx, resource: bindings, ignored listing per whitelist Aug 27 14:22:26.290: INFO: namespace e2e-tests-projected-vczxx deletion completed in 10.186885239s • [SLOW TEST:17.399 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:22:26.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 27 14:22:39.667: INFO: Waiting up to 5m0s for pod "client-envvars-c325974a-e870-11ea-894e-0242ac11000d" in namespace "e2e-tests-pods-4tsnk" to be "success or failure" Aug 27 14:22:39.969: INFO: Pod "client-envvars-c325974a-e870-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 302.040344ms Aug 27 14:22:41.990: INFO: Pod "client-envvars-c325974a-e870-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323049316s Aug 27 14:22:43.998: INFO: Pod "client-envvars-c325974a-e870-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330575191s Aug 27 14:22:46.221: INFO: Pod "client-envvars-c325974a-e870-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.553633514s Aug 27 14:22:48.227: INFO: Pod "client-envvars-c325974a-e870-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 8.55977455s Aug 27 14:22:50.232: INFO: Pod "client-envvars-c325974a-e870-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.564317359s STEP: Saw pod success Aug 27 14:22:50.232: INFO: Pod "client-envvars-c325974a-e870-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:22:50.314: INFO: Trying to get logs from node hunter-worker2 pod client-envvars-c325974a-e870-11ea-894e-0242ac11000d container env3cont: STEP: delete the pod Aug 27 14:22:51.940: INFO: Waiting for pod client-envvars-c325974a-e870-11ea-894e-0242ac11000d to disappear Aug 27 14:22:52.276: INFO: Pod client-envvars-c325974a-e870-11ea-894e-0242ac11000d no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:22:52.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-4tsnk" for this suite. 
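The service environment-variable test relies on the kubelet injecting <SERVICE>_SERVICE_HOST/_PORT variables into containers that start after the Service exists. A quick manual check with arbitrary names (the conformance test also runs a backing server pod, which is not needed just to see the variables):

kubectl create service clusterip envvar-backend --tcp=80:80
# The client pod must be created after the Service, or the variables will be missing.
kubectl run envvar-client --image=busybox --restart=Never -- sh -c 'env | grep ENVVAR_BACKEND'
kubectl logs envvar-client   # expect ENVVAR_BACKEND_SERVICE_HOST and ENVVAR_BACKEND_SERVICE_PORT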
Aug 27 14:23:34.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:23:35.280: INFO: namespace: e2e-tests-pods-4tsnk, resource: bindings, ignored listing per whitelist Aug 27 14:23:35.359: INFO: namespace e2e-tests-pods-4tsnk deletion completed in 43.042275938s • [SLOW TEST:69.069 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:23:35.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Aug 27 14:23:37.052: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-l8qm2,SelfLink:/api/v1/namespaces/e2e-tests-watch-l8qm2/configmaps/e2e-watch-test-watch-closed,UID:e51051cc-e870-11ea-a485-0242ac120004,ResourceVersion:2646941,Generation:0,CreationTimestamp:2020-08-27 14:23:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 27 14:23:37.055: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-l8qm2,SelfLink:/api/v1/namespaces/e2e-tests-watch-l8qm2/configmaps/e2e-watch-test-watch-closed,UID:e51051cc-e870-11ea-a485-0242ac120004,ResourceVersion:2646944,Generation:0,CreationTimestamp:2020-08-27 14:23:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Aug 27 14:23:37.242: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-l8qm2,SelfLink:/api/v1/namespaces/e2e-tests-watch-l8qm2/configmaps/e2e-watch-test-watch-closed,UID:e51051cc-e870-11ea-a485-0242ac120004,ResourceVersion:2646945,Generation:0,CreationTimestamp:2020-08-27 14:23:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 27 14:23:37.243: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-l8qm2,SelfLink:/api/v1/namespaces/e2e-tests-watch-l8qm2/configmaps/e2e-watch-test-watch-closed,UID:e51051cc-e870-11ea-a485-0242ac120004,ResourceVersion:2646946,Generation:0,CreationTimestamp:2020-08-27 14:23:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:23:37.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-l8qm2" for this suite. Aug 27 14:23:43.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:23:43.539: INFO: namespace: e2e-tests-watch-l8qm2, resource: bindings, ignored listing per whitelist Aug 27 14:23:43.650: INFO: namespace e2e-tests-watch-l8qm2 deletion completed in 6.344006681s • [SLOW TEST:8.290 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:23:43.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-nhrpt.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-nhrpt.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-nhrpt.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-nhrpt.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-nhrpt.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-nhrpt.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 27 14:23:52.127: INFO: DNS probes using e2e-tests-dns-nhrpt/dns-test-e97a8cfc-e870-11ea-894e-0242ac11000d succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:23:52.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-nhrpt" for this suite. 
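The long wheezy/jessie probe scripts above dig the kubernetes.default service at several qualification levels over UDP and TCP, plus host and pod A-record checks. For an ad-hoc check of the same core records, a single pod is usually enough (busybox:1.28 is chosen here only because its nslookup is known to behave; any image with working DNS tools will do):

kubectl run dns-check --image=busybox:1.28 --restart=Never -- \
  sh -c 'nslookup kubernetes.default && nslookup kubernetes.default.svc.cluster.local'
kubectl logs dns-check   # both lookups should resolve to the cluster IP of the kubernetes Service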
Aug 27 14:24:00.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:24:00.881: INFO: namespace: e2e-tests-dns-nhrpt, resource: bindings, ignored listing per whitelist Aug 27 14:24:00.941: INFO: namespace e2e-tests-dns-nhrpt deletion completed in 8.756022482s • [SLOW TEST:17.290 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:24:00.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:24:07.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-25gvv" for this suite. 
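The Kubelet logging test only verifies that whatever a busybox command writes to stdout comes back through the logs API. The same can be checked by hand (names are arbitrary):

kubectl run logs-demo --image=busybox --restart=Never -- echo "scheduling and logging worked"
kubectl logs logs-demo   # prints the echoed line once the container has run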
Aug 27 14:25:02.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:25:03.245: INFO: namespace: e2e-tests-kubelet-test-25gvv, resource: bindings, ignored listing per whitelist Aug 27 14:25:03.267: INFO: namespace e2e-tests-kubelet-test-25gvv deletion completed in 55.490000549s • [SLOW TEST:62.325 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:25:03.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-193b934d-e871-11ea-894e-0242ac11000d STEP: Creating a pod to test consume secrets Aug 27 14:25:04.927: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1996b8d0-e871-11ea-894e-0242ac11000d" in namespace "e2e-tests-projected-mxzjm" to be "success or failure" Aug 27 14:25:05.359: INFO: Pod "pod-projected-secrets-1996b8d0-e871-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 431.367381ms Aug 27 14:25:07.996: INFO: Pod "pod-projected-secrets-1996b8d0-e871-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.068363981s Aug 27 14:25:10.102: INFO: Pod "pod-projected-secrets-1996b8d0-e871-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.174418864s Aug 27 14:25:12.374: INFO: Pod "pod-projected-secrets-1996b8d0-e871-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.446517246s Aug 27 14:25:14.504: INFO: Pod "pod-projected-secrets-1996b8d0-e871-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 9.5760952s Aug 27 14:25:16.511: INFO: Pod "pod-projected-secrets-1996b8d0-e871-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.583305121s STEP: Saw pod success Aug 27 14:25:16.511: INFO: Pod "pod-projected-secrets-1996b8d0-e871-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:25:16.537: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-1996b8d0-e871-11ea-894e-0242ac11000d container projected-secret-volume-test: STEP: delete the pod Aug 27 14:25:16.745: INFO: Waiting for pod pod-projected-secrets-1996b8d0-e871-11ea-894e-0242ac11000d to disappear Aug 27 14:25:16.760: INFO: Pod pod-projected-secrets-1996b8d0-e871-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:25:16.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mxzjm" for this suite. Aug 27 14:25:28.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:25:29.906: INFO: namespace: e2e-tests-projected-mxzjm, resource: bindings, ignored listing per whitelist Aug 27 14:25:29.934: INFO: namespace e2e-tests-projected-mxzjm deletion completed in 13.166621424s • [SLOW TEST:26.666 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:25:29.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 27 14:25:32.680: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Aug 27 14:25:32.822: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:25:33.258: INFO: Number of nodes with available pods: 0 Aug 27 14:25:33.258: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:25:34.295: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:25:34.541: INFO: Number of nodes with available pods: 0 Aug 27 14:25:34.541: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:25:35.489: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:25:35.750: INFO: Number of nodes with available pods: 0 Aug 27 14:25:35.750: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:25:36.266: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:25:36.270: INFO: Number of nodes with available pods: 0 Aug 27 14:25:36.270: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:25:37.305: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:25:37.798: INFO: Number of nodes with available pods: 0 Aug 27 14:25:37.798: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:25:38.984: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:25:39.553: INFO: Number of nodes with available pods: 0 Aug 27 14:25:39.553: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:25:40.479: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:25:40.484: INFO: Number of nodes with available pods: 0 Aug 27 14:25:40.484: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:25:41.435: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:25:41.441: INFO: Number of nodes with available pods: 0 Aug 27 14:25:41.441: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:25:42.283: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:25:42.408: INFO: Number of nodes with available pods: 0 Aug 27 14:25:42.408: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:25:43.434: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:25:43.457: INFO: Number of nodes with available pods: 0 Aug 27 14:25:43.457: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:25:44.384: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:25:44.714: INFO: Number of nodes with available pods: 2 Aug 27 14:25:44.714: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Aug 27 14:25:44.979: INFO: Wrong image for pod: daemon-set-gjncq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:25:44.979: INFO: Wrong image for pod: daemon-set-x5fwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:25:45.050: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:25:46.060: INFO: Wrong image for pod: daemon-set-gjncq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:25:46.060: INFO: Wrong image for pod: daemon-set-x5fwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:25:46.069: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:25:47.059: INFO: Wrong image for pod: daemon-set-gjncq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:25:47.059: INFO: Wrong image for pod: daemon-set-x5fwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:25:47.070: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:25:48.336: INFO: Wrong image for pod: daemon-set-gjncq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:25:48.336: INFO: Wrong image for pod: daemon-set-x5fwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:25:48.343: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:25:49.098: INFO: Wrong image for pod: daemon-set-gjncq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:25:49.098: INFO: Wrong image for pod: daemon-set-x5fwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:25:49.109: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:25:50.057: INFO: Wrong image for pod: daemon-set-gjncq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:25:50.057: INFO: Wrong image for pod: daemon-set-x5fwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Aug 27 14:25:50.058: INFO: Pod daemon-set-x5fwx is not available Aug 27 14:25:50.067: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:25:51.057: INFO: Wrong image for pod: daemon-set-gjncq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:25:51.057: INFO: Wrong image for pod: daemon-set-x5fwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:25:51.058: INFO: Pod daemon-set-x5fwx is not available Aug 27 14:25:51.065: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:25:52.069: INFO: Wrong image for pod: daemon-set-gjncq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:25:52.070: INFO: Wrong image for pod: daemon-set-x5fwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:25:52.070: INFO: Pod daemon-set-x5fwx is not available Aug 27 14:25:52.078: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:25:53.058: INFO: Wrong image for pod: daemon-set-gjncq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:25:53.059: INFO: Wrong image for pod: daemon-set-x5fwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:25:53.059: INFO: Pod daemon-set-x5fwx is not available Aug 27 14:25:53.068: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:25:54.057: INFO: Wrong image for pod: daemon-set-gjncq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:25:54.057: INFO: Wrong image for pod: daemon-set-x5fwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:25:54.057: INFO: Pod daemon-set-x5fwx is not available Aug 27 14:25:54.063: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:25:55.093: INFO: Wrong image for pod: daemon-set-gjncq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:25:55.093: INFO: Wrong image for pod: daemon-set-x5fwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:25:55.093: INFO: Pod daemon-set-x5fwx is not available Aug 27 14:25:55.358: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:25:56.187: INFO: Wrong image for pod: daemon-set-gjncq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:25:56.187: INFO: Wrong image for pod: daemon-set-x5fwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Aug 27 14:25:56.187: INFO: Pod daemon-set-x5fwx is not available Aug 27 14:25:56.195: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:25:57.181: INFO: Wrong image for pod: daemon-set-gjncq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:25:57.181: INFO: Wrong image for pod: daemon-set-x5fwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:25:57.181: INFO: Pod daemon-set-x5fwx is not available Aug 27 14:25:57.189: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:25:58.058: INFO: Wrong image for pod: daemon-set-gjncq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:25:58.058: INFO: Wrong image for pod: daemon-set-x5fwx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:25:58.058: INFO: Pod daemon-set-x5fwx is not available Aug 27 14:25:58.067: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:25:59.061: INFO: Wrong image for pod: daemon-set-gjncq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:25:59.061: INFO: Pod daemon-set-pplmc is not available Aug 27 14:25:59.070: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:26:00.345: INFO: Wrong image for pod: daemon-set-gjncq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:26:00.345: INFO: Pod daemon-set-pplmc is not available Aug 27 14:26:00.353: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:26:01.266: INFO: Wrong image for pod: daemon-set-gjncq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:26:01.266: INFO: Pod daemon-set-pplmc is not available Aug 27 14:26:01.274: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:26:02.481: INFO: Wrong image for pod: daemon-set-gjncq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:26:02.482: INFO: Pod daemon-set-pplmc is not available Aug 27 14:26:02.844: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:26:03.333: INFO: Wrong image for pod: daemon-set-gjncq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Aug 27 14:26:03.333: INFO: Pod daemon-set-pplmc is not available Aug 27 14:26:03.341: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:26:04.254: INFO: Wrong image for pod: daemon-set-gjncq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:26:04.254: INFO: Pod daemon-set-pplmc is not available Aug 27 14:26:04.261: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:26:05.128: INFO: Wrong image for pod: daemon-set-gjncq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:26:05.128: INFO: Pod daemon-set-pplmc is not available Aug 27 14:26:05.165: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:26:06.063: INFO: Wrong image for pod: daemon-set-gjncq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:26:06.318: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:26:07.188: INFO: Wrong image for pod: daemon-set-gjncq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:26:07.199: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:26:08.058: INFO: Wrong image for pod: daemon-set-gjncq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 27 14:26:08.059: INFO: Pod daemon-set-gjncq is not available Aug 27 14:26:08.070: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:26:09.059: INFO: Pod daemon-set-b74nf is not available Aug 27 14:26:09.067: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Aug 27 14:26:09.075: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:26:09.081: INFO: Number of nodes with available pods: 1 Aug 27 14:26:09.081: INFO: Node hunter-worker2 is running more than one daemon pod Aug 27 14:26:10.111: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:26:10.191: INFO: Number of nodes with available pods: 1 Aug 27 14:26:10.191: INFO: Node hunter-worker2 is running more than one daemon pod Aug 27 14:26:11.092: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:26:11.100: INFO: Number of nodes with available pods: 1 Aug 27 14:26:11.100: INFO: Node hunter-worker2 is running more than one daemon pod Aug 27 14:26:12.094: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:26:12.099: INFO: Number of nodes with available pods: 1 Aug 27 14:26:12.099: INFO: Node hunter-worker2 is running more than one daemon pod Aug 27 14:26:13.094: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:26:13.099: INFO: Number of nodes with available pods: 2 Aug 27 14:26:13.099: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-xn675, will wait for the garbage collector to delete the pods Aug 27 14:26:13.210: INFO: Deleting DaemonSet.extensions daemon-set took: 11.195768ms Aug 27 14:26:13.312: INFO: Terminating DaemonSet.extensions daemon-set pods took: 102.414054ms Aug 27 14:26:28.694: INFO: Number of nodes with available pods: 0 Aug 27 14:26:28.695: INFO: Number of running nodes: 0, number of available pods: 0 Aug 27 14:26:28.702: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-xn675/daemonsets","resourceVersion":"2647435"},"items":null} Aug 27 14:26:28.711: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-xn675/pods","resourceVersion":"2647435"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:26:28.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-xn675" for this suite. 
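The DaemonSet test above creates a one-container DaemonSet, flips the pod template image, and waits for the RollingUpdate strategy to replace the pods node by node (skipping the tainted control-plane node, as the repeated "can't tolerate" messages show). A hand-driven version of the same rollout, using the two images from this run but an otherwise illustrative manifest:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate          # replace pods one node at a time when the template changes
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF

kubectl rollout status daemonset/daemon-set
# Trigger the rolling update by changing the pod template image:
kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status daemonset/daemon-set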
Aug 27 14:26:34.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:26:34.855: INFO: namespace: e2e-tests-daemonsets-xn675, resource: bindings, ignored listing per whitelist Aug 27 14:26:34.918: INFO: namespace e2e-tests-daemonsets-xn675 deletion completed in 6.178600727s • [SLOW TEST:64.983 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:26:34.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 27 14:26:35.095: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f7d199b-e871-11ea-894e-0242ac11000d" in namespace "e2e-tests-downward-api-8qb5r" to be "success or failure" Aug 27 14:26:35.151: INFO: Pod "downwardapi-volume-4f7d199b-e871-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 55.414417ms Aug 27 14:26:37.289: INFO: Pod "downwardapi-volume-4f7d199b-e871-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193369598s Aug 27 14:26:39.554: INFO: Pod "downwardapi-volume-4f7d199b-e871-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.45879816s Aug 27 14:26:41.756: INFO: Pod "downwardapi-volume-4f7d199b-e871-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.660900369s Aug 27 14:26:43.913: INFO: Pod "downwardapi-volume-4f7d199b-e871-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.817407423s Aug 27 14:26:45.920: INFO: Pod "downwardapi-volume-4f7d199b-e871-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.824753652s STEP: Saw pod success Aug 27 14:26:45.920: INFO: Pod "downwardapi-volume-4f7d199b-e871-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:26:46.200: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-4f7d199b-e871-11ea-894e-0242ac11000d container client-container: STEP: delete the pod Aug 27 14:26:46.237: INFO: Waiting for pod downwardapi-volume-4f7d199b-e871-11ea-894e-0242ac11000d to disappear Aug 27 14:26:46.387: INFO: Pod downwardapi-volume-4f7d199b-e871-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:26:46.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-8qb5r" for this suite. Aug 27 14:26:56.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:26:57.721: INFO: namespace: e2e-tests-downward-api-8qb5r, resource: bindings, ignored listing per whitelist Aug 27 14:26:57.738: INFO: namespace e2e-tests-downward-api-8qb5r deletion completed in 10.923190095s • [SLOW TEST:22.820 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:26:57.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Aug 27 14:27:09.591: INFO: Successfully updated pod "labelsupdate5d6f407a-e871-11ea-894e-0242ac11000d" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:27:12.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-ffpnj" for this suite. 
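The labels-update test checks that a file projected from metadata.labels is rewritten when the pod's labels change. A small reproduction with invented names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo
  labels:
    stage: before
spec:
  containers:
  - name: main
    image: busybox
    # Keep printing the projected labels file so the change shows up in the logs.
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF

kubectl label pod labels-demo stage=after --overwrite
kubectl logs -f labels-demo   # the file picks up stage="after" after the next kubelet sync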
Aug 27 14:27:34.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:27:34.414: INFO: namespace: e2e-tests-projected-ffpnj, resource: bindings, ignored listing per whitelist Aug 27 14:27:34.494: INFO: namespace e2e-tests-projected-ffpnj deletion completed in 22.374514821s • [SLOW TEST:36.755 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:27:34.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Aug 27 14:27:34.659: INFO: Waiting up to 5m0s for pod "pod-72fde033-e871-11ea-894e-0242ac11000d" in namespace "e2e-tests-emptydir-62bpq" to be "success or failure" Aug 27 14:27:34.714: INFO: Pod "pod-72fde033-e871-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 55.485291ms Aug 27 14:27:36.741: INFO: Pod "pod-72fde033-e871-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082480132s Aug 27 14:27:38.748: INFO: Pod "pod-72fde033-e871-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 4.088943931s Aug 27 14:27:40.754: INFO: Pod "pod-72fde033-e871-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.095386429s STEP: Saw pod success Aug 27 14:27:40.754: INFO: Pod "pod-72fde033-e871-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:27:40.759: INFO: Trying to get logs from node hunter-worker2 pod pod-72fde033-e871-11ea-894e-0242ac11000d container test-container: STEP: delete the pod Aug 27 14:27:41.040: INFO: Waiting for pod pod-72fde033-e871-11ea-894e-0242ac11000d to disappear Aug 27 14:27:41.077: INFO: Pod pod-72fde033-e871-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:27:41.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-62bpq" for this suite. 
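The (non-root,0777,tmpfs) emptyDir case above follows the same pattern: a tmpfs-backed emptyDir mounted into a pod that runs as a non-root user, with the 0777 mode set and verified from inside the test container rather than in the manifest. A minimal sketch, assuming busybox and UID 1001 (the suite uses its own image and test UID):

    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-example          # illustrative
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001                     # assumed non-root UID
      containers:
      - name: test-container                # container name as it appears in the log above
        image: busybox:1.29                 # assumed image
        command: ["sh", "-c", "ls -ld /cache && mount | grep ' /cache '"]   # show mode and tmpfs backing
        volumeMounts:
        - name: cache
          mountPath: /cache
      volumes:
      - name: cache
        emptyDir:
          medium: Memory                    # tmpfs-backed emptyDir, the "tmpfs" half of the case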
Aug 27 14:27:53.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:27:54.755: INFO: namespace: e2e-tests-emptydir-62bpq, resource: bindings, ignored listing per whitelist Aug 27 14:27:54.768: INFO: namespace e2e-tests-emptydir-62bpq deletion completed in 13.682296753s • [SLOW TEST:20.273 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:27:54.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:28:56.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-c9554" for this suite. 
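The container-probe case above needs nothing more than a probe that always fails: a readiness probe that never succeeds keeps the pod NotReady indefinitely, and since restarts are driven by liveness probes (which this pod omits), the container is never restarted either. A minimal sketch, with the image and probe timings as assumptions:

    apiVersion: v1
    kind: Pod
    metadata:
      name: readiness-always-failing        # illustrative
    spec:
      containers:
      - name: probe-test
        image: busybox:1.29                 # assumed image
        command: ["sh", "-c", "sleep 3600"] # keep the container alive
        readinessProbe:
          exec:
            command: ["/bin/false"]         # always fails, so Ready never becomes True
          initialDelaySeconds: 5
          periodSeconds: 5
        # no livenessProbe, so RestartCount stays 0 for the life of the pod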
Aug 27 14:29:22.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:29:22.576: INFO: namespace: e2e-tests-container-probe-c9554, resource: bindings, ignored listing per whitelist Aug 27 14:29:22.579: INFO: namespace e2e-tests-container-probe-c9554 deletion completed in 26.477084487s • [SLOW TEST:87.811 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:29:22.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Aug 27 14:29:26.878: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-b370e2a4-e871-11ea-894e-0242ac11000d", GenerateName:"", Namespace:"e2e-tests-pods-25822", SelfLink:"/api/v1/namespaces/e2e-tests-pods-25822/pods/pod-submit-remove-b370e2a4-e871-11ea-894e-0242ac11000d", UID:"b374549d-e871-11ea-a485-0242ac120004", ResourceVersion:"2647909", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63734135362, loc:(*time.Location)(0x70e3ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"769089126"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-djlmm", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0x4002131f40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-djlmm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40022b0d88), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40014ed4a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x40022b0dd0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x40022b0df0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0x40022b0df8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0x40022b0dfc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734135362, loc:(*time.Location)(0x70e3ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734135366, loc:(*time.Location)(0x70e3ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734135366, loc:(*time.Location)(0x70e3ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734135362, loc:(*time.Location)(0x70e3ac0)}}, 
Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.2", PodIP:"10.244.1.29", StartTime:(*v1.Time)(0x40022ba100), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0x40022ba120), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://a2987b98dd053103418cac047d0ae7b49d28db40a13016dfee8b38a5aa48fb92"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:29:38.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-25822" for this suite. Aug 27 14:29:44.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:29:44.194: INFO: namespace: e2e-tests-pods-25822, resource: bindings, ignored listing per whitelist Aug 27 14:29:44.282: INFO: namespace e2e-tests-pods-25822 deletion completed in 6.178132075s • [SLOW TEST:21.701 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:29:44.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Aug 27 14:29:44.453: INFO: PodSpec: initContainers in spec.initContainers Aug 27 14:30:38.726: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-c05d71c9-e871-11ea-894e-0242ac11000d", GenerateName:"", Namespace:"e2e-tests-init-container-qwb6j", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-qwb6j/pods/pod-init-c05d71c9-e871-11ea-894e-0242ac11000d", UID:"c0601d27-e871-11ea-a485-0242ac120004", ResourceVersion:"2648090", Generation:0, 
CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63734135384, loc:(*time.Location)(0x70e3ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"452082210"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-tx6mv", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0x40015bc580), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tx6mv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tx6mv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", 
Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tx6mv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000f9b7c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40017ebc80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x4000f9b910)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x4000f9b930)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0x4000f9b938), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0x4000f9b93c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734135384, loc:(*time.Location)(0x70e3ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734135384, loc:(*time.Location)(0x70e3ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734135384, loc:(*time.Location)(0x70e3ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready 
status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734135384, loc:(*time.Location)(0x70e3ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.2", PodIP:"10.244.1.30", StartTime:(*v1.Time)(0x4000b9d640), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0x4000b9d680), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x4000513420)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://240651409697759c52378ab78723a01f223419379c7a548abacddb652ccf1559"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x4000b9d6a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x4000b9d660), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:30:38.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-qwb6j" for this suite. 
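The pod dumped above can be read back out of the Go struct literal: two init containers (init1 running /bin/false, init2 running /bin/true), one app container (run1, pause:3.1) with equal requests and limits, and restartPolicy Always. As a plainer reference, the same pod expressed as a manifest; only the generated name differs, the other fields are taken from the dump:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-init-example                # illustrative; the suite generates a UID-suffixed name
      labels:
        name: foo                           # the generated "time" label from the dump is omitted
    spec:
      restartPolicy: Always                 # the RestartAlways case under test
      initContainers:
      - name: init1
        image: docker.io/library/busybox:1.29
        command: ["/bin/false"]             # keeps failing, so initialization never completes
      - name: init2
        image: docker.io/library/busybox:1.29
        command: ["/bin/true"]              # never gets to run while init1 is failing
      containers:
      - name: run1
        image: k8s.gcr.io/pause:3.1         # the app container that must never start
        resources:
          limits:
            cpu: 100m
            memory: "52428800"
          requests:
            cpu: 100m
            memory: "52428800"

The status block in the dump shows exactly the expected outcome: init1 at RestartCount 3, init2 still Waiting, and run1 never started.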
Aug 27 14:31:05.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:31:05.661: INFO: namespace: e2e-tests-init-container-qwb6j, resource: bindings, ignored listing per whitelist Aug 27 14:31:06.235: INFO: namespace e2e-tests-init-container-qwb6j deletion completed in 27.284007504s • [SLOW TEST:81.953 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:31:06.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all Aug 27 14:31:06.545: INFO: Waiting up to 5m0s for pod "client-containers-f14a689d-e871-11ea-894e-0242ac11000d" in namespace "e2e-tests-containers-2km6v" to be "success or failure" Aug 27 14:31:06.641: INFO: Pod "client-containers-f14a689d-e871-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 95.498424ms Aug 27 14:31:08.648: INFO: Pod "client-containers-f14a689d-e871-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102430357s Aug 27 14:31:10.653: INFO: Pod "client-containers-f14a689d-e871-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 4.108207337s Aug 27 14:31:12.659: INFO: Pod "client-containers-f14a689d-e871-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.113447592s STEP: Saw pod success Aug 27 14:31:12.659: INFO: Pod "client-containers-f14a689d-e871-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:31:12.706: INFO: Trying to get logs from node hunter-worker2 pod client-containers-f14a689d-e871-11ea-894e-0242ac11000d container test-container: STEP: delete the pod Aug 27 14:31:12.837: INFO: Waiting for pod client-containers-f14a689d-e871-11ea-894e-0242ac11000d to disappear Aug 27 14:31:13.111: INFO: Pod client-containers-f14a689d-e871-11ea-894e-0242ac11000d no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:31:13.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-2km6v" for this suite. 
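The Docker Containers case above ("override the image's default command and arguments") sets both command and args on the container, which replace the image's ENTRYPOINT and CMD respectively. A minimal sketch, with the image and argument values as assumptions (the suite uses its own entrypoint test image and asserts on the container's output):

    apiVersion: v1
    kind: Pod
    metadata:
      name: client-containers-example       # illustrative
    spec:
      restartPolicy: Never
      containers:
      - name: test-container                # container name as it appears in the log above
        image: busybox:1.29                 # assumed image
        command: ["echo"]                   # overrides the image ENTRYPOINT
        args: ["overridden", "arguments"]   # overrides the image CMD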
Aug 27 14:31:20.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:31:20.497: INFO: namespace: e2e-tests-containers-2km6v, resource: bindings, ignored listing per whitelist Aug 27 14:31:20.695: INFO: namespace e2e-tests-containers-2km6v deletion completed in 7.161714081s • [SLOW TEST:14.459 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:31:20.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Aug 27 14:31:21.307: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:31:21.655: INFO: Number of nodes with available pods: 0 Aug 27 14:31:21.655: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:31:22.684: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:31:22.700: INFO: Number of nodes with available pods: 0 Aug 27 14:31:22.701: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:31:23.663: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:31:23.667: INFO: Number of nodes with available pods: 0 Aug 27 14:31:23.667: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:31:24.666: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:31:24.672: INFO: Number of nodes with available pods: 0 Aug 27 14:31:24.672: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:31:25.876: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:31:25.884: INFO: Number of nodes with available pods: 0 Aug 27 14:31:25.884: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:31:26.683: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:31:26.689: INFO: Number of nodes with available pods: 0 Aug 27 14:31:26.689: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:31:27.663: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:31:27.669: INFO: Number of nodes with available pods: 2 Aug 27 14:31:27.669: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Aug 27 14:31:28.204: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:31:28.227: INFO: Number of nodes with available pods: 1 Aug 27 14:31:28.228: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:31:29.239: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:31:29.245: INFO: Number of nodes with available pods: 1 Aug 27 14:31:29.245: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:31:30.331: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:31:30.658: INFO: Number of nodes with available pods: 1 Aug 27 14:31:30.658: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:31:31.239: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:31:31.245: INFO: Number of nodes with available pods: 1 Aug 27 14:31:31.245: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:31:32.239: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:31:32.245: INFO: Number of nodes with available pods: 1 Aug 27 14:31:32.245: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:31:33.236: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 14:31:33.240: INFO: Number of nodes with available pods: 2 Aug 27 14:31:33.240: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
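The DaemonSet being exercised here is a one-container DaemonSet named daemon-set; the suite then forces one of its pods to Failed and waits for the controller to replace it, which is the dip from 2 to 1 available pods visible above. A sketch of an equivalent DaemonSet, with the selector labels and container name as assumptions and nginx:1.14-alpine borrowed from the image used elsewhere in this run:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set                      # name taken from the log above
    spec:
      selector:
        matchLabels:
          app: daemon-set                   # assumed labels; the suite uses its own selector
      template:
        metadata:
          labels:
            app: daemon-set
        spec:
          containers:
          - name: app                       # assumed container name
            image: docker.io/library/nginx:1.14-alpine

Because the pod template carries no toleration for the node-role.kubernetes.io/master taint, the controller skips hunter-control-plane, which is what the repeated "DaemonSet pods can't tolerate node" messages record.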
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-fk5rv, will wait for the garbage collector to delete the pods Aug 27 14:31:33.311: INFO: Deleting DaemonSet.extensions daemon-set took: 8.348299ms Aug 27 14:31:34.213: INFO: Terminating DaemonSet.extensions daemon-set pods took: 900.681232ms Aug 27 14:31:48.419: INFO: Number of nodes with available pods: 0 Aug 27 14:31:48.419: INFO: Number of running nodes: 0, number of available pods: 0 Aug 27 14:31:48.424: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-fk5rv/daemonsets","resourceVersion":"2648317"},"items":null} Aug 27 14:31:48.428: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-fk5rv/pods","resourceVersion":"2648317"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:31:48.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-fk5rv" for this suite. Aug 27 14:31:54.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:31:54.736: INFO: namespace: e2e-tests-daemonsets-fk5rv, resource: bindings, ignored listing per whitelist Aug 27 14:31:54.740: INFO: namespace e2e-tests-daemonsets-fk5rv deletion completed in 6.28492751s • [SLOW TEST:34.043 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:31:54.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-xkxnw STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 27 14:31:54.948: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 27 14:32:19.408: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.33:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-xkxnw PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 27 14:32:19.408: INFO: >>> kubeConfig: /root/.kube/config I0827 14:32:19.502729 7 log.go:172] (0x40003beb00) 
(0x4001975040) Create stream I0827 14:32:19.503351 7 log.go:172] (0x40003beb00) (0x4001975040) Stream added, broadcasting: 1 I0827 14:32:19.522982 7 log.go:172] (0x40003beb00) Reply frame received for 1 I0827 14:32:19.523759 7 log.go:172] (0x40003beb00) (0x40019750e0) Create stream I0827 14:32:19.523855 7 log.go:172] (0x40003beb00) (0x40019750e0) Stream added, broadcasting: 3 I0827 14:32:19.525988 7 log.go:172] (0x40003beb00) Reply frame received for 3 I0827 14:32:19.526282 7 log.go:172] (0x40003beb00) (0x400098d180) Create stream I0827 14:32:19.526350 7 log.go:172] (0x40003beb00) (0x400098d180) Stream added, broadcasting: 5 I0827 14:32:19.527668 7 log.go:172] (0x40003beb00) Reply frame received for 5 I0827 14:32:19.592408 7 log.go:172] (0x40003beb00) Data frame received for 3 I0827 14:32:19.592876 7 log.go:172] (0x40003beb00) Data frame received for 1 I0827 14:32:19.593043 7 log.go:172] (0x4001975040) (1) Data frame handling I0827 14:32:19.593280 7 log.go:172] (0x40019750e0) (3) Data frame handling I0827 14:32:19.593457 7 log.go:172] (0x40003beb00) Data frame received for 5 I0827 14:32:19.593609 7 log.go:172] (0x400098d180) (5) Data frame handling I0827 14:32:19.594396 7 log.go:172] (0x40019750e0) (3) Data frame sent I0827 14:32:19.594587 7 log.go:172] (0x4001975040) (1) Data frame sent I0827 14:32:19.595136 7 log.go:172] (0x40003beb00) Data frame received for 3 I0827 14:32:19.595251 7 log.go:172] (0x40019750e0) (3) Data frame handling I0827 14:32:19.598347 7 log.go:172] (0x40003beb00) (0x4001975040) Stream removed, broadcasting: 1 I0827 14:32:19.598788 7 log.go:172] (0x40003beb00) Go away received I0827 14:32:19.600840 7 log.go:172] (0x40003beb00) (0x4001975040) Stream removed, broadcasting: 1 I0827 14:32:19.601553 7 log.go:172] (0x40003beb00) (0x40019750e0) Stream removed, broadcasting: 3 I0827 14:32:19.601848 7 log.go:172] (0x40003beb00) (0x400098d180) Stream removed, broadcasting: 5 Aug 27 14:32:19.603: INFO: Found all expected endpoints: [netserver-0] Aug 27 14:32:19.609: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.196:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-xkxnw PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 27 14:32:19.609: INFO: >>> kubeConfig: /root/.kube/config I0827 14:32:19.663316 7 log.go:172] (0x4001cc8370) (0x4000503e00) Create stream I0827 14:32:19.663448 7 log.go:172] (0x4001cc8370) (0x4000503e00) Stream added, broadcasting: 1 I0827 14:32:19.666752 7 log.go:172] (0x4001cc8370) Reply frame received for 1 I0827 14:32:19.666957 7 log.go:172] (0x4001cc8370) (0x4001d986e0) Create stream I0827 14:32:19.667065 7 log.go:172] (0x4001cc8370) (0x4001d986e0) Stream added, broadcasting: 3 I0827 14:32:19.669072 7 log.go:172] (0x4001cc8370) Reply frame received for 3 I0827 14:32:19.669195 7 log.go:172] (0x4001cc8370) (0x4000503ea0) Create stream I0827 14:32:19.669259 7 log.go:172] (0x4001cc8370) (0x4000503ea0) Stream added, broadcasting: 5 I0827 14:32:19.670772 7 log.go:172] (0x4001cc8370) Reply frame received for 5 I0827 14:32:19.750287 7 log.go:172] (0x4001cc8370) Data frame received for 5 I0827 14:32:19.750482 7 log.go:172] (0x4000503ea0) (5) Data frame handling I0827 14:32:19.750583 7 log.go:172] (0x4001cc8370) Data frame received for 3 I0827 14:32:19.750715 7 log.go:172] (0x4001d986e0) (3) Data frame handling I0827 14:32:19.750836 7 log.go:172] (0x4001d986e0) (3) Data frame sent I0827 14:32:19.750937 7 
log.go:172] (0x4001cc8370) Data frame received for 3 I0827 14:32:19.751025 7 log.go:172] (0x4001d986e0) (3) Data frame handling I0827 14:32:19.752102 7 log.go:172] (0x4001cc8370) Data frame received for 1 I0827 14:32:19.752268 7 log.go:172] (0x4000503e00) (1) Data frame handling I0827 14:32:19.752437 7 log.go:172] (0x4000503e00) (1) Data frame sent I0827 14:32:19.752604 7 log.go:172] (0x4001cc8370) (0x4000503e00) Stream removed, broadcasting: 1 I0827 14:32:19.752906 7 log.go:172] (0x4001cc8370) Go away received I0827 14:32:19.753285 7 log.go:172] (0x4001cc8370) (0x4000503e00) Stream removed, broadcasting: 1 I0827 14:32:19.753399 7 log.go:172] (0x4001cc8370) (0x4001d986e0) Stream removed, broadcasting: 3 I0827 14:32:19.753495 7 log.go:172] (0x4001cc8370) (0x4000503ea0) Stream removed, broadcasting: 5 Aug 27 14:32:19.753: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:32:19.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-xkxnw" for this suite. Aug 27 14:32:41.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:32:41.862: INFO: namespace: e2e-tests-pod-network-test-xkxnw, resource: bindings, ignored listing per whitelist Aug 27 14:32:41.951: INFO: namespace e2e-tests-pod-network-test-xkxnw deletion completed in 22.188879573s • [SLOW TEST:47.210 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:32:41.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-2a47269b-e872-11ea-894e-0242ac11000d STEP: Creating a pod to test consume configMaps Aug 27 14:32:42.208: INFO: Waiting up to 5m0s for pod "pod-configmaps-2a47e1a5-e872-11ea-894e-0242ac11000d" in namespace "e2e-tests-configmap-khsj8" to be "success or failure" Aug 27 14:32:42.329: INFO: Pod "pod-configmaps-2a47e1a5-e872-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 120.962491ms Aug 27 14:32:44.336: INFO: Pod "pod-configmaps-2a47e1a5-e872-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.127635257s Aug 27 14:32:46.343: INFO: Pod "pod-configmaps-2a47e1a5-e872-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 4.134127045s Aug 27 14:32:48.349: INFO: Pod "pod-configmaps-2a47e1a5-e872-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.141073343s STEP: Saw pod success Aug 27 14:32:48.350: INFO: Pod "pod-configmaps-2a47e1a5-e872-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:32:48.355: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-2a47e1a5-e872-11ea-894e-0242ac11000d container configmap-volume-test: STEP: delete the pod Aug 27 14:32:48.400: INFO: Waiting for pod pod-configmaps-2a47e1a5-e872-11ea-894e-0242ac11000d to disappear Aug 27 14:32:48.411: INFO: Pod pod-configmaps-2a47e1a5-e872-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:32:48.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-khsj8" for this suite. Aug 27 14:32:54.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:32:54.562: INFO: namespace: e2e-tests-configmap-khsj8, resource: bindings, ignored listing per whitelist Aug 27 14:32:54.581: INFO: namespace e2e-tests-configmap-khsj8 deletion completed in 6.159894027s • [SLOW TEST:12.628 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:32:54.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-31da43a1-e872-11ea-894e-0242ac11000d STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-31da43a1-e872-11ea-894e-0242ac11000d STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:34:19.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qwntw" for this suite. 
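The ConfigMap "mappings as non-root" case above combines two things: an items list that remaps a ConfigMap key to a nested path inside the volume, and a pod-level security context that runs the consuming container as a non-root user. A minimal sketch, assuming busybox, UID 1000, and a single key named data-1 (the real key names and UID are not shown in this excerpt):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmaps-example          # illustrative; the suite generates a UID-suffixed name
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000                     # assumed non-root UID
        runAsNonRoot: true
      containers:
      - name: configmap-volume-test         # container name as it appears in the log above
        image: busybox:1.29                 # assumed image
        command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-1"]
        volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume
      volumes:
      - name: configmap-volume
        configMap:
          name: configmap-test-volume-map   # the suite appends a generated UID to this name
          items:                            # the "mappings" under test: remap a key to a new path
          - key: data-1
            path: path/to/data-1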
Aug 27 14:34:41.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:34:41.398: INFO: namespace: e2e-tests-projected-qwntw, resource: bindings, ignored listing per whitelist Aug 27 14:34:41.464: INFO: namespace e2e-tests-projected-qwntw deletion completed in 22.194636757s • [SLOW TEST:106.882 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:34:41.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-tkpgh [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-tkpgh STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-tkpgh Aug 27 14:34:41.747: INFO: Found 0 stateful pods, waiting for 1 Aug 27 14:34:51.754: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Aug 27 14:34:51.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 27 14:34:53.393: INFO: stderr: "I0827 14:34:53.199270 1228 log.go:172] (0x4000166840) (0x40007b0640) Create stream\nI0827 14:34:53.202208 1228 log.go:172] (0x4000166840) (0x40007b0640) Stream added, broadcasting: 1\nI0827 14:34:53.212878 1228 log.go:172] (0x4000166840) Reply frame received for 1\nI0827 14:34:53.213536 1228 log.go:172] (0x4000166840) (0x40007b06e0) Create stream\nI0827 14:34:53.213642 1228 log.go:172] (0x4000166840) (0x40007b06e0) Stream added, broadcasting: 3\nI0827 14:34:53.215329 1228 log.go:172] (0x4000166840) Reply frame received for 3\nI0827 14:34:53.215612 1228 log.go:172] (0x4000166840) (0x400067abe0) Create stream\nI0827 14:34:53.215714 1228 log.go:172] (0x4000166840) (0x400067abe0) Stream added, broadcasting: 5\nI0827 14:34:53.217168 1228 log.go:172] (0x4000166840) Reply frame received for 5\nI0827 14:34:53.370238 1228 log.go:172] (0x4000166840) Data frame received for 3\nI0827 14:34:53.370478 1228 log.go:172] (0x40007b06e0) (3) Data frame handling\nI0827 
14:34:53.370950 1228 log.go:172] (0x40007b06e0) (3) Data frame sent\nI0827 14:34:53.371814 1228 log.go:172] (0x4000166840) Data frame received for 5\nI0827 14:34:53.371977 1228 log.go:172] (0x400067abe0) (5) Data frame handling\nI0827 14:34:53.373147 1228 log.go:172] (0x4000166840) Data frame received for 3\nI0827 14:34:53.373262 1228 log.go:172] (0x40007b06e0) (3) Data frame handling\nI0827 14:34:53.373368 1228 log.go:172] (0x4000166840) Data frame received for 1\nI0827 14:34:53.373517 1228 log.go:172] (0x40007b0640) (1) Data frame handling\nI0827 14:34:53.373631 1228 log.go:172] (0x40007b0640) (1) Data frame sent\nI0827 14:34:53.374678 1228 log.go:172] (0x4000166840) (0x40007b0640) Stream removed, broadcasting: 1\nI0827 14:34:53.377668 1228 log.go:172] (0x4000166840) Go away received\nI0827 14:34:53.378694 1228 log.go:172] (0x4000166840) (0x40007b0640) Stream removed, broadcasting: 1\nI0827 14:34:53.379077 1228 log.go:172] (0x4000166840) (0x40007b06e0) Stream removed, broadcasting: 3\nI0827 14:34:53.379334 1228 log.go:172] (0x4000166840) (0x400067abe0) Stream removed, broadcasting: 5\n" Aug 27 14:34:53.393: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 27 14:34:53.394: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 27 14:34:53.434: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 27 14:35:03.528: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 27 14:35:03.528: INFO: Waiting for statefulset status.replicas updated to 0 Aug 27 14:35:03.625: INFO: POD NODE PHASE GRACE CONDITIONS Aug 27 14:35:03.626: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:34:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:34:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:34:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:34:41 +0000 UTC }] Aug 27 14:35:03.626: INFO: ss-1 Pending [] Aug 27 14:35:03.626: INFO: Aug 27 14:35:03.626: INFO: StatefulSet ss has not reached scale 3, at 2 Aug 27 14:35:04.636: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.916513381s Aug 27 14:35:05.762: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.907168801s Aug 27 14:35:06.947: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.780963489s Aug 27 14:35:07.955: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.595489029s Aug 27 14:35:08.964: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.587419095s Aug 27 14:35:09.972: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.579087775s Aug 27 14:35:10.981: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.571019327s Aug 27 14:35:11.990: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.561742063s Aug 27 14:35:13.000: INFO: Verifying statefulset ss doesn't scale past 3 for another 553.0273ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-tkpgh Aug 27 14:35:14.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true' Aug 27 14:35:15.437: INFO: stderr: "I0827 14:35:15.298245 1253 log.go:172] (0x400015a840) (0x4000669220) Create stream\nI0827 14:35:15.304194 1253 log.go:172] (0x400015a840) (0x4000669220) Stream added, broadcasting: 1\nI0827 14:35:15.315742 1253 log.go:172] (0x400015a840) Reply frame received for 1\nI0827 14:35:15.316664 1253 log.go:172] (0x400015a840) (0x4000792000) Create stream\nI0827 14:35:15.316850 1253 log.go:172] (0x400015a840) (0x4000792000) Stream added, broadcasting: 3\nI0827 14:35:15.318460 1253 log.go:172] (0x400015a840) Reply frame received for 3\nI0827 14:35:15.318659 1253 log.go:172] (0x400015a840) (0x4000792140) Create stream\nI0827 14:35:15.318717 1253 log.go:172] (0x400015a840) (0x4000792140) Stream added, broadcasting: 5\nI0827 14:35:15.320014 1253 log.go:172] (0x400015a840) Reply frame received for 5\nI0827 14:35:15.410916 1253 log.go:172] (0x400015a840) Data frame received for 3\nI0827 14:35:15.411197 1253 log.go:172] (0x400015a840) Data frame received for 1\nI0827 14:35:15.411464 1253 log.go:172] (0x400015a840) Data frame received for 5\nI0827 14:35:15.411928 1253 log.go:172] (0x4000792140) (5) Data frame handling\nI0827 14:35:15.412029 1253 log.go:172] (0x4000792000) (3) Data frame handling\nI0827 14:35:15.412186 1253 log.go:172] (0x4000669220) (1) Data frame handling\nI0827 14:35:15.414413 1253 log.go:172] (0x4000792000) (3) Data frame sent\nI0827 14:35:15.415189 1253 log.go:172] (0x400015a840) Data frame received for 3\nI0827 14:35:15.415297 1253 log.go:172] (0x4000792000) (3) Data frame handling\nI0827 14:35:15.415500 1253 log.go:172] (0x4000669220) (1) Data frame sent\nI0827 14:35:15.417459 1253 log.go:172] (0x400015a840) (0x4000669220) Stream removed, broadcasting: 1\nI0827 14:35:15.419107 1253 log.go:172] (0x400015a840) Go away received\nI0827 14:35:15.421895 1253 log.go:172] (0x400015a840) (0x4000669220) Stream removed, broadcasting: 1\nI0827 14:35:15.422375 1253 log.go:172] (0x400015a840) (0x4000792000) Stream removed, broadcasting: 3\nI0827 14:35:15.422563 1253 log.go:172] (0x400015a840) (0x4000792140) Stream removed, broadcasting: 5\n" Aug 27 14:35:15.438: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 27 14:35:15.438: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 27 14:35:15.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:35:17.264: INFO: stderr: "I0827 14:35:17.154680 1276 log.go:172] (0x400018c840) (0x40007b6640) Create stream\nI0827 14:35:17.157530 1276 log.go:172] (0x400018c840) (0x40007b6640) Stream added, broadcasting: 1\nI0827 14:35:17.168560 1276 log.go:172] (0x400018c840) Reply frame received for 1\nI0827 14:35:17.169181 1276 log.go:172] (0x400018c840) (0x40007b66e0) Create stream\nI0827 14:35:17.169254 1276 log.go:172] (0x400018c840) (0x40007b66e0) Stream added, broadcasting: 3\nI0827 14:35:17.171305 1276 log.go:172] (0x400018c840) Reply frame received for 3\nI0827 14:35:17.171734 1276 log.go:172] (0x400018c840) (0x400066edc0) Create stream\nI0827 14:35:17.171845 1276 log.go:172] (0x400018c840) (0x400066edc0) Stream added, broadcasting: 5\nI0827 14:35:17.173644 1276 log.go:172] (0x400018c840) Reply frame received for 5\nI0827 14:35:17.238799 1276 log.go:172] (0x400018c840) Data frame received for 3\nI0827 14:35:17.239053 1276 log.go:172] 
(0x400018c840) Data frame received for 1\nI0827 14:35:17.239291 1276 log.go:172] (0x400018c840) Data frame received for 5\nI0827 14:35:17.239498 1276 log.go:172] (0x40007b66e0) (3) Data frame handling\nI0827 14:35:17.239687 1276 log.go:172] (0x40007b6640) (1) Data frame handling\nI0827 14:35:17.239846 1276 log.go:172] (0x400066edc0) (5) Data frame handling\nI0827 14:35:17.241502 1276 log.go:172] (0x40007b66e0) (3) Data frame sent\nI0827 14:35:17.241685 1276 log.go:172] (0x400066edc0) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0827 14:35:17.242339 1276 log.go:172] (0x40007b6640) (1) Data frame sent\nI0827 14:35:17.242431 1276 log.go:172] (0x400018c840) Data frame received for 5\nI0827 14:35:17.242511 1276 log.go:172] (0x400066edc0) (5) Data frame handling\nI0827 14:35:17.242740 1276 log.go:172] (0x400018c840) Data frame received for 3\nI0827 14:35:17.242827 1276 log.go:172] (0x40007b66e0) (3) Data frame handling\nI0827 14:35:17.243835 1276 log.go:172] (0x400018c840) (0x40007b6640) Stream removed, broadcasting: 1\nI0827 14:35:17.248347 1276 log.go:172] (0x400018c840) Go away received\nI0827 14:35:17.250239 1276 log.go:172] (0x400018c840) (0x40007b6640) Stream removed, broadcasting: 1\nI0827 14:35:17.251240 1276 log.go:172] (0x400018c840) (0x40007b66e0) Stream removed, broadcasting: 3\nI0827 14:35:17.252206 1276 log.go:172] (0x400018c840) (0x400066edc0) Stream removed, broadcasting: 5\n" Aug 27 14:35:17.265: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 27 14:35:17.265: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 27 14:35:17.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:35:18.673: INFO: stderr: "I0827 14:35:18.575938 1300 log.go:172] (0x400013a840) (0x400071a640) Create stream\nI0827 14:35:18.580817 1300 log.go:172] (0x400013a840) (0x400071a640) Stream added, broadcasting: 1\nI0827 14:35:18.590086 1300 log.go:172] (0x400013a840) Reply frame received for 1\nI0827 14:35:18.590573 1300 log.go:172] (0x400013a840) (0x4000682e60) Create stream\nI0827 14:35:18.590650 1300 log.go:172] (0x400013a840) (0x4000682e60) Stream added, broadcasting: 3\nI0827 14:35:18.592867 1300 log.go:172] (0x400013a840) Reply frame received for 3\nI0827 14:35:18.593394 1300 log.go:172] (0x400013a840) (0x4000660000) Create stream\nI0827 14:35:18.593546 1300 log.go:172] (0x400013a840) (0x4000660000) Stream added, broadcasting: 5\nI0827 14:35:18.595276 1300 log.go:172] (0x400013a840) Reply frame received for 5\nI0827 14:35:18.653146 1300 log.go:172] (0x400013a840) Data frame received for 5\nI0827 14:35:18.653473 1300 log.go:172] (0x400013a840) Data frame received for 1\nI0827 14:35:18.653634 1300 log.go:172] (0x400013a840) Data frame received for 3\nI0827 14:35:18.653836 1300 log.go:172] (0x400071a640) (1) Data frame handling\nI0827 14:35:18.654224 1300 log.go:172] (0x4000660000) (5) Data frame handling\nI0827 14:35:18.654377 1300 log.go:172] (0x4000682e60) (3) Data frame handling\nI0827 14:35:18.655053 1300 log.go:172] (0x4000682e60) (3) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0827 14:35:18.655586 1300 log.go:172] (0x400071a640) (1) Data frame sent\nI0827 14:35:18.655766 1300 log.go:172] (0x4000660000) (5) Data frame sent\nI0827 14:35:18.656348 1300 log.go:172] 
(0x400013a840) Data frame received for 3\nI0827 14:35:18.656410 1300 log.go:172] (0x4000682e60) (3) Data frame handling\nI0827 14:35:18.656888 1300 log.go:172] (0x400013a840) Data frame received for 5\nI0827 14:35:18.656976 1300 log.go:172] (0x4000660000) (5) Data frame handling\nI0827 14:35:18.658861 1300 log.go:172] (0x400013a840) (0x400071a640) Stream removed, broadcasting: 1\nI0827 14:35:18.659308 1300 log.go:172] (0x400013a840) Go away received\nI0827 14:35:18.661467 1300 log.go:172] (0x400013a840) (0x400071a640) Stream removed, broadcasting: 1\nI0827 14:35:18.661857 1300 log.go:172] (0x400013a840) (0x4000682e60) Stream removed, broadcasting: 3\nI0827 14:35:18.662255 1300 log.go:172] (0x400013a840) (0x4000660000) Stream removed, broadcasting: 5\n" Aug 27 14:35:18.674: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 27 14:35:18.674: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 27 14:35:18.701: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 27 14:35:18.701: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 27 14:35:18.701: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Aug 27 14:35:18.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 27 14:35:20.175: INFO: stderr: "I0827 14:35:20.035303 1324 log.go:172] (0x4000166840) (0x40007b4640) Create stream\nI0827 14:35:20.037841 1324 log.go:172] (0x4000166840) (0x40007b4640) Stream added, broadcasting: 1\nI0827 14:35:20.053468 1324 log.go:172] (0x4000166840) Reply frame received for 1\nI0827 14:35:20.054574 1324 log.go:172] (0x4000166840) (0x40007b46e0) Create stream\nI0827 14:35:20.054680 1324 log.go:172] (0x4000166840) (0x40007b46e0) Stream added, broadcasting: 3\nI0827 14:35:20.057526 1324 log.go:172] (0x4000166840) Reply frame received for 3\nI0827 14:35:20.058037 1324 log.go:172] (0x4000166840) (0x4000656e60) Create stream\nI0827 14:35:20.058172 1324 log.go:172] (0x4000166840) (0x4000656e60) Stream added, broadcasting: 5\nI0827 14:35:20.060208 1324 log.go:172] (0x4000166840) Reply frame received for 5\nI0827 14:35:20.155802 1324 log.go:172] (0x4000166840) Data frame received for 5\nI0827 14:35:20.156074 1324 log.go:172] (0x4000166840) Data frame received for 1\nI0827 14:35:20.156208 1324 log.go:172] (0x40007b4640) (1) Data frame handling\nI0827 14:35:20.156302 1324 log.go:172] (0x4000656e60) (5) Data frame handling\nI0827 14:35:20.157176 1324 log.go:172] (0x4000166840) Data frame received for 3\nI0827 14:35:20.157347 1324 log.go:172] (0x40007b46e0) (3) Data frame handling\nI0827 14:35:20.159540 1324 log.go:172] (0x40007b4640) (1) Data frame sent\nI0827 14:35:20.159850 1324 log.go:172] (0x40007b46e0) (3) Data frame sent\nI0827 14:35:20.159955 1324 log.go:172] (0x4000166840) Data frame received for 3\nI0827 14:35:20.160255 1324 log.go:172] (0x4000166840) (0x40007b4640) Stream removed, broadcasting: 1\nI0827 14:35:20.160689 1324 log.go:172] (0x40007b46e0) (3) Data frame handling\nI0827 14:35:20.162880 1324 log.go:172] (0x4000166840) Go away received\nI0827 14:35:20.164460 1324 log.go:172] (0x4000166840) (0x40007b4640) Stream removed, broadcasting: 1\nI0827 14:35:20.164945 1324 log.go:172] 
(0x4000166840) (0x40007b46e0) Stream removed, broadcasting: 3\nI0827 14:35:20.165318 1324 log.go:172] (0x4000166840) (0x4000656e60) Stream removed, broadcasting: 5\n" Aug 27 14:35:20.176: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 27 14:35:20.176: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 27 14:35:20.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 27 14:35:21.591: INFO: stderr: "I0827 14:35:21.477004 1347 log.go:172] (0x400015c840) (0x40005cb360) Create stream\nI0827 14:35:21.480657 1347 log.go:172] (0x400015c840) (0x40005cb360) Stream added, broadcasting: 1\nI0827 14:35:21.493110 1347 log.go:172] (0x400015c840) Reply frame received for 1\nI0827 14:35:21.493688 1347 log.go:172] (0x400015c840) (0x400078c000) Create stream\nI0827 14:35:21.493762 1347 log.go:172] (0x400015c840) (0x400078c000) Stream added, broadcasting: 3\nI0827 14:35:21.495817 1347 log.go:172] (0x400015c840) Reply frame received for 3\nI0827 14:35:21.496236 1347 log.go:172] (0x400015c840) (0x40005cb400) Create stream\nI0827 14:35:21.496354 1347 log.go:172] (0x400015c840) (0x40005cb400) Stream added, broadcasting: 5\nI0827 14:35:21.497815 1347 log.go:172] (0x400015c840) Reply frame received for 5\nI0827 14:35:21.575487 1347 log.go:172] (0x400015c840) Data frame received for 3\nI0827 14:35:21.575658 1347 log.go:172] (0x400015c840) Data frame received for 5\nI0827 14:35:21.575792 1347 log.go:172] (0x40005cb400) (5) Data frame handling\nI0827 14:35:21.575853 1347 log.go:172] (0x400078c000) (3) Data frame handling\nI0827 14:35:21.576671 1347 log.go:172] (0x400015c840) Data frame received for 1\nI0827 14:35:21.576834 1347 log.go:172] (0x40005cb360) (1) Data frame handling\nI0827 14:35:21.577432 1347 log.go:172] (0x400078c000) (3) Data frame sent\nI0827 14:35:21.577566 1347 log.go:172] (0x400015c840) Data frame received for 3\nI0827 14:35:21.577630 1347 log.go:172] (0x400078c000) (3) Data frame handling\nI0827 14:35:21.577983 1347 log.go:172] (0x40005cb360) (1) Data frame sent\nI0827 14:35:21.578893 1347 log.go:172] (0x400015c840) (0x40005cb360) Stream removed, broadcasting: 1\nI0827 14:35:21.580878 1347 log.go:172] (0x400015c840) Go away received\nI0827 14:35:21.582871 1347 log.go:172] (0x400015c840) (0x40005cb360) Stream removed, broadcasting: 1\nI0827 14:35:21.583108 1347 log.go:172] (0x400015c840) (0x400078c000) Stream removed, broadcasting: 3\nI0827 14:35:21.583271 1347 log.go:172] (0x400015c840) (0x40005cb400) Stream removed, broadcasting: 5\n" Aug 27 14:35:21.592: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 27 14:35:21.592: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 27 14:35:21.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 27 14:35:23.048: INFO: stderr: "I0827 14:35:22.905300 1371 log.go:172] (0x400074e370) (0x400075a640) Create stream\nI0827 14:35:22.907871 1371 log.go:172] (0x400074e370) (0x400075a640) Stream added, broadcasting: 1\nI0827 14:35:22.921374 1371 log.go:172] (0x400074e370) Reply frame received for 1\nI0827 14:35:22.922527 1371 log.go:172] (0x400074e370) 
(0x400075a6e0) Create stream\nI0827 14:35:22.922648 1371 log.go:172] (0x400074e370) (0x400075a6e0) Stream added, broadcasting: 3\nI0827 14:35:22.924650 1371 log.go:172] (0x400074e370) Reply frame received for 3\nI0827 14:35:22.925051 1371 log.go:172] (0x400074e370) (0x400064ac80) Create stream\nI0827 14:35:22.925175 1371 log.go:172] (0x400074e370) (0x400064ac80) Stream added, broadcasting: 5\nI0827 14:35:22.926507 1371 log.go:172] (0x400074e370) Reply frame received for 5\nI0827 14:35:23.020676 1371 log.go:172] (0x400074e370) Data frame received for 3\nI0827 14:35:23.021472 1371 log.go:172] (0x400074e370) Data frame received for 5\nI0827 14:35:23.021803 1371 log.go:172] (0x400074e370) Data frame received for 1\nI0827 14:35:23.022038 1371 log.go:172] (0x400075a640) (1) Data frame handling\nI0827 14:35:23.022217 1371 log.go:172] (0x400064ac80) (5) Data frame handling\nI0827 14:35:23.022475 1371 log.go:172] (0x400075a6e0) (3) Data frame handling\nI0827 14:35:23.023070 1371 log.go:172] (0x400075a640) (1) Data frame sent\nI0827 14:35:23.024180 1371 log.go:172] (0x400075a6e0) (3) Data frame sent\nI0827 14:35:23.024652 1371 log.go:172] (0x400074e370) Data frame received for 3\nI0827 14:35:23.024920 1371 log.go:172] (0x400075a6e0) (3) Data frame handling\nI0827 14:35:23.029178 1371 log.go:172] (0x400074e370) (0x400075a640) Stream removed, broadcasting: 1\nI0827 14:35:23.030203 1371 log.go:172] (0x400074e370) Go away received\nI0827 14:35:23.032718 1371 log.go:172] (0x400074e370) (0x400075a640) Stream removed, broadcasting: 1\nI0827 14:35:23.033163 1371 log.go:172] (0x400074e370) (0x400075a6e0) Stream removed, broadcasting: 3\nI0827 14:35:23.033373 1371 log.go:172] (0x400074e370) (0x400064ac80) Stream removed, broadcasting: 5\n" Aug 27 14:35:23.049: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 27 14:35:23.049: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 27 14:35:23.049: INFO: Waiting for statefulset status.replicas updated to 0 Aug 27 14:35:23.054: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Aug 27 14:35:33.069: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 27 14:35:33.070: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 27 14:35:33.070: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 27 14:35:33.093: INFO: POD NODE PHASE GRACE CONDITIONS Aug 27 14:35:33.093: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:34:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:34:41 +0000 UTC }] Aug 27 14:35:33.093: INFO: ss-1 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 
UTC }] Aug 27 14:35:33.094: INFO: ss-2 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC }] Aug 27 14:35:33.094: INFO: Aug 27 14:35:33.094: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 27 14:35:34.117: INFO: POD NODE PHASE GRACE CONDITIONS Aug 27 14:35:34.117: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:34:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:34:41 +0000 UTC }] Aug 27 14:35:34.118: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC }] Aug 27 14:35:34.118: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC }] Aug 27 14:35:34.119: INFO: Aug 27 14:35:34.119: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 27 14:35:35.126: INFO: POD NODE PHASE GRACE CONDITIONS Aug 27 14:35:35.126: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:34:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:34:41 +0000 UTC }] Aug 27 14:35:35.127: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC }] Aug 27 14:35:35.127: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:23 +0000 UTC ContainersNotReady containers with 
unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC }] Aug 27 14:35:35.127: INFO: Aug 27 14:35:35.127: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 27 14:35:36.137: INFO: POD NODE PHASE GRACE CONDITIONS Aug 27 14:35:36.137: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:34:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:34:41 +0000 UTC }] Aug 27 14:35:36.137: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC }] Aug 27 14:35:36.137: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC }] Aug 27 14:35:36.138: INFO: Aug 27 14:35:36.138: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 27 14:35:37.265: INFO: POD NODE PHASE GRACE CONDITIONS Aug 27 14:35:37.265: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:34:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:34:41 +0000 UTC }] Aug 27 14:35:37.265: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC }] Aug 27 14:35:37.265: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC }] Aug 
27 14:35:37.266: INFO: Aug 27 14:35:37.266: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 27 14:35:38.283: INFO: POD NODE PHASE GRACE CONDITIONS Aug 27 14:35:38.283: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:34:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:34:41 +0000 UTC }] Aug 27 14:35:38.284: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC }] Aug 27 14:35:38.285: INFO: Aug 27 14:35:38.285: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 27 14:35:39.294: INFO: POD NODE PHASE GRACE CONDITIONS Aug 27 14:35:39.294: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:34:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:34:41 +0000 UTC }] Aug 27 14:35:39.295: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC }] Aug 27 14:35:39.295: INFO: Aug 27 14:35:39.295: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 27 14:35:40.303: INFO: POD NODE PHASE GRACE CONDITIONS Aug 27 14:35:40.303: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:34:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:34:41 +0000 UTC }] Aug 27 14:35:40.304: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC }] Aug 27 14:35:40.304: INFO: Aug 27 14:35:40.304: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 
27 14:35:41.313: INFO: POD NODE PHASE GRACE CONDITIONS Aug 27 14:35:41.313: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:34:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:34:41 +0000 UTC }] Aug 27 14:35:41.314: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC }] Aug 27 14:35:41.314: INFO: Aug 27 14:35:41.314: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 27 14:35:42.323: INFO: POD NODE PHASE GRACE CONDITIONS Aug 27 14:35:42.323: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:34:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:34:41 +0000 UTC }] Aug 27 14:35:42.324: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 14:35:03 +0000 UTC }] Aug 27 14:35:42.324: INFO: Aug 27 14:35:42.324: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-tkpgh Aug 27 14:35:43.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:35:44.634: INFO: rc: 1 Aug 27 14:35:44.635: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0x400134e5d0 exit status 1 true [0x4000d70028 0x4000d70040 0x4000d70058] [0x4000d70028 0x4000d70040 0x4000d70058] [0x4000d70038 0x4000d70050] [0x856db8 0x856db8] 0x40014ecc00 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Aug 27 14:35:54.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:35:55.820: INFO: rc: 1 Aug 27 
14:35:55.821: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x400157e120 exit status 1 true [0x40002e6c60 0x40002e6cd8 0x40002e6d00] [0x40002e6c60 0x40002e6cd8 0x40002e6d00] [0x40002e6ca0 0x40002e6cf0] [0x856db8 0x856db8] 0x400181fd40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 27 14:36:05.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:36:06.981: INFO: rc: 1 Aug 27 14:36:06.981: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4001adaae0 exit status 1 true [0x4001322028 0x4001322040 0x4001322058] [0x4001322028 0x4001322040 0x4001322058] [0x4001322038 0x4001322050] [0x856db8 0x856db8] 0x40017ea5a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 27 14:36:16.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:36:18.090: INFO: rc: 1 Aug 27 14:36:18.091: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x400157e270 exit status 1 true [0x40002e6d10 0x40002e6da0 0x40002e6dc0] [0x40002e6d10 0x40002e6da0 0x40002e6dc0] [0x40002e6d90 0x40002e6db8] [0x856db8 0x856db8] 0x4000ec11a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 27 14:36:28.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:36:29.236: INFO: rc: 1 Aug 27 14:36:29.237: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x400134e6f0 exit status 1 true [0x4000d70060 0x4000d70078 0x4000d70090] [0x4000d70060 0x4000d70078 0x4000d70090] [0x4000d70070 0x4000d70088] [0x856db8 0x856db8] 0x40014ed200 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 27 14:36:39.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:36:40.321: INFO: rc: 1 Aug 27 14:36:40.321: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- 
/bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4001b40f30 exit status 1 true [0x4001a44078 0x4001a44090 0x4001a440a8] [0x4001a44078 0x4001a44090 0x4001a440a8] [0x4001a44088 0x4001a440a0] [0x856db8 0x856db8] 0x4001ce9320 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 27 14:36:50.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:36:51.482: INFO: rc: 1 Aug 27 14:36:51.483: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x400134e840 exit status 1 true [0x4000d70098 0x4000d700b0 0x4000d700c8] [0x4000d70098 0x4000d700b0 0x4000d700c8] [0x4000d700a8 0x4000d700c0] [0x856db8 0x856db8] 0x40014ed4a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 27 14:37:01.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:37:02.629: INFO: rc: 1 Aug 27 14:37:02.630: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x400134e960 exit status 1 true [0x4000d700d0 0x4000d700e8 0x4000d70100] [0x4000d700d0 0x4000d700e8 0x4000d70100] [0x4000d700e0 0x4000d700f8] [0x856db8 0x856db8] 0x40014ed920 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 27 14:37:12.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:37:13.794: INFO: rc: 1 Aug 27 14:37:13.795: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4001b41230 exit status 1 true [0x4001a440b0 0x4001a440c8 0x4001a440e0] [0x4001a440b0 0x4001a440c8 0x4001a440e0] [0x4001a440c0 0x4001a440d8] [0x856db8 0x856db8] 0x4000fbcb40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 27 14:37:23.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:37:24.902: INFO: rc: 1 Aug 27 14:37:24.902: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4001adac30 exit status 1 true [0x4001322060 0x4001322078 
0x4001322090] [0x4001322060 0x4001322078 0x4001322090] [0x4001322070 0x4001322088] [0x856db8 0x856db8] 0x40017ea960 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 27 14:37:34.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:37:36.005: INFO: rc: 1 Aug 27 14:37:36.005: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x400134edb0 exit status 1 true [0x4000d70108 0x4000d70120 0x4000d70140] [0x4000d70108 0x4000d70120 0x4000d70140] [0x4000d70118 0x4000d70138] [0x856db8 0x856db8] 0x40014edda0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 27 14:37:46.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:37:47.137: INFO: rc: 1 Aug 27 14:37:47.137: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4001b40300 exit status 1 true [0x4001a44008 0x4001a44020 0x4001a44038] [0x4001a44008 0x4001a44020 0x4001a44038] [0x4001a44018 0x4001a44030] [0x856db8 0x856db8] 0x400181fd40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 27 14:37:57.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:37:58.282: INFO: rc: 1 Aug 27 14:37:58.283: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4001ada120 exit status 1 true [0x4001322000 0x4001322018 0x4001322030] [0x4001322000 0x4001322018 0x4001322030] [0x4001322010 0x4001322028] [0x856db8 0x856db8] 0x4001ce9200 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 27 14:38:08.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:38:09.372: INFO: rc: 1 Aug 27 14:38:09.372: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4001ada210 exit status 1 true [0x4001322038 0x4001322050 0x4001322068] [0x4001322038 0x4001322050 0x4001322068] [0x4001322048 0x4001322060] [0x856db8 0x856db8] 0x4001ce9500 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found 
error: exit status 1 Aug 27 14:38:19.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:38:20.473: INFO: rc: 1 Aug 27 14:38:20.474: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x400134e180 exit status 1 true [0x40002e6c60 0x40002e6cd8 0x40002e6d00] [0x40002e6c60 0x40002e6cd8 0x40002e6d00] [0x40002e6ca0 0x40002e6cf0] [0x856db8 0x856db8] 0x4001557560 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 27 14:38:30.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:38:31.634: INFO: rc: 1 Aug 27 14:38:31.634: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x400134e5a0 exit status 1 true [0x40002e6d10 0x40002e6da0 0x40002e6dc0] [0x40002e6d10 0x40002e6da0 0x40002e6dc0] [0x40002e6d90 0x40002e6db8] [0x856db8 0x856db8] 0x40013310e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 27 14:38:41.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:38:42.796: INFO: rc: 1 Aug 27 14:38:42.797: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x400157e180 exit status 1 true [0x4000d70000 0x4000d70018 0x4000d70030] [0x4000d70000 0x4000d70018 0x4000d70030] [0x4000d70010 0x4000d70028] [0x856db8 0x856db8] 0x40014ec900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 27 14:38:52.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:38:53.929: INFO: rc: 1 Aug 27 14:38:53.930: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x400157e300 exit status 1 true [0x4000d70038 0x4000d70050 0x4000d70068] [0x4000d70038 0x4000d70050 0x4000d70068] [0x4000d70048 0x4000d70060] [0x856db8 0x856db8] 0x40014ed140 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 27 14:39:03.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:39:05.413: INFO: rc: 1 Aug 27 14:39:05.413: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4001ada360 exit status 1 true [0x4001322070 0x4001322088 0x40013220a0] [0x4001322070 0x4001322088 0x40013220a0] [0x4001322080 0x4001322098] [0x856db8 0x856db8] 0x4000ec1380 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 27 14:39:15.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:39:17.002: INFO: rc: 1 Aug 27 14:39:17.002: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4001ada480 exit status 1 true [0x40013220a8 0x40013220c0 0x40013220d8] [0x40013220a8 0x40013220c0 0x40013220d8] [0x40013220b8 0x40013220d0] [0x856db8 0x856db8] 0x4000ec1620 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 27 14:39:27.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:39:28.178: INFO: rc: 1 Aug 27 14:39:28.179: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4001ada5a0 exit status 1 true [0x40013220e0 0x40013220f8 0x4001322110] [0x40013220e0 0x40013220f8 0x4001322110] [0x40013220f0 0x4001322108] [0x856db8 0x856db8] 0x4000ec1920 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 27 14:39:38.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:39:39.355: INFO: rc: 1 Aug 27 14:39:39.355: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4001ada6f0 exit status 1 true [0x4001322118 0x4001322130 0x4001322148] [0x4001322118 0x4001322130 0x4001322148] [0x4001322128 0x4001322140] [0x856db8 0x856db8] 0x4000ec1bc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 27 14:39:49.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:39:50.587: INFO: rc: 1 Aug 27 14:39:50.588: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl 
[kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4001ada150 exit status 1 true [0x4000d70000 0x4000d70018 0x4000d70030] [0x4000d70000 0x4000d70018 0x4000d70030] [0x4000d70010 0x4000d70028] [0x856db8 0x856db8] 0x4001275da0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 27 14:40:00.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:40:01.728: INFO: rc: 1 Aug 27 14:40:01.728: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x4001b40420 exit status 1 true [0x40002e6c60 0x40002e6cd8 0x40002e6d00] [0x40002e6c60 0x40002e6cd8 0x40002e6d00] [0x40002e6ca0 0x40002e6cf0] [0x856db8 0x856db8] 0x400195dec0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 27 14:40:11.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:40:12.872: INFO: rc: 1 Aug 27 14:40:12.873: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x400134e150 exit status 1 true [0x4001322000 0x4001322018 0x4001322030] [0x4001322000 0x4001322018 0x4001322030] [0x4001322010 0x4001322028] [0x856db8 0x856db8] 0x400181fd40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 27 14:40:22.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:40:24.041: INFO: rc: 1 Aug 27 14:40:24.041: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0x400134e600 exit status 1 true [0x4001322038 0x4001322050 0x4001322068] [0x4001322038 0x4001322050 0x4001322068] [0x4001322048 0x4001322060] [0x856db8 0x856db8] 0x4000ec11a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 27 14:40:34.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:40:35.274: INFO: rc: 1 Aug 27 14:40:35.274: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): 
pods "ss-0" not found [] 0x400134e720 exit status 1 true [0x4001322070 0x4001322088 0x40013220a0] [0x4001322070 0x4001322088 0x40013220a0] [0x4001322080 0x4001322098] [0x856db8 0x856db8] 0x4000ec1500 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 27 14:40:45.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tkpgh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 27 14:40:46.416: INFO: rc: 1 Aug 27 14:40:46.416: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Aug 27 14:40:46.416: INFO: Scaling statefulset ss to 0 Aug 27 14:40:46.438: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Aug 27 14:40:46.441: INFO: Deleting all statefulset in ns e2e-tests-statefulset-tkpgh Aug 27 14:40:46.444: INFO: Scaling statefulset ss to 0 Aug 27 14:40:46.456: INFO: Waiting for statefulset status.replicas updated to 0 Aug 27 14:40:46.459: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:40:46.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-tkpgh" for this suite. Aug 27 14:40:54.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:40:54.785: INFO: namespace: e2e-tests-statefulset-tkpgh, resource: bindings, ignored listing per whitelist Aug 27 14:40:54.826: INFO: namespace e2e-tests-statefulset-tkpgh deletion completed in 8.307279807s • [SLOW TEST:373.361 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:40:54.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 27 14:40:55.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never 
--generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-tm2dp' Aug 27 14:41:00.632: INFO: stderr: "" Aug 27 14:41:00.632: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 Aug 27 14:41:00.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-tm2dp' Aug 27 14:41:05.886: INFO: stderr: "" Aug 27 14:41:05.886: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:41:05.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-tm2dp" for this suite. Aug 27 14:41:12.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:41:12.093: INFO: namespace: e2e-tests-kubectl-tm2dp, resource: bindings, ignored listing per whitelist Aug 27 14:41:12.118: INFO: namespace e2e-tests-kubectl-tm2dp deletion completed in 6.224414429s • [SLOW TEST:17.291 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:41:12.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 27 14:41:12.525: INFO: Waiting up to 5m0s for pod "pod-5a7c0cfe-e873-11ea-894e-0242ac11000d" in namespace "e2e-tests-emptydir-drzlj" to be "success or failure" Aug 27 14:41:12.543: INFO: Pod "pod-5a7c0cfe-e873-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 17.612258ms Aug 27 14:41:14.549: INFO: Pod "pod-5a7c0cfe-e873-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023540341s Aug 27 14:41:16.554: INFO: Pod "pod-5a7c0cfe-e873-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028797924s Aug 27 14:41:18.659: INFO: Pod "pod-5a7c0cfe-e873-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.13362535s Aug 27 14:41:20.847: INFO: Pod "pod-5a7c0cfe-e873-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.321598227s Aug 27 14:41:22.953: INFO: Pod "pod-5a7c0cfe-e873-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.427386433s STEP: Saw pod success Aug 27 14:41:22.953: INFO: Pod "pod-5a7c0cfe-e873-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:41:22.958: INFO: Trying to get logs from node hunter-worker pod pod-5a7c0cfe-e873-11ea-894e-0242ac11000d container test-container: STEP: delete the pod Aug 27 14:41:23.830: INFO: Waiting for pod pod-5a7c0cfe-e873-11ea-894e-0242ac11000d to disappear Aug 27 14:41:24.739: INFO: Pod pod-5a7c0cfe-e873-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:41:24.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-drzlj" for this suite. Aug 27 14:41:34.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:41:34.218: INFO: namespace: e2e-tests-emptydir-drzlj, resource: bindings, ignored listing per whitelist Aug 27 14:41:34.225: INFO: namespace e2e-tests-emptydir-drzlj deletion completed in 9.297453527s • [SLOW TEST:22.106 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:41:34.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-67a50df5-e873-11ea-894e-0242ac11000d STEP: Creating a pod to test consume configMaps Aug 27 14:41:34.623: INFO: Waiting up to 5m0s for pod "pod-configmaps-67a6d67f-e873-11ea-894e-0242ac11000d" in namespace "e2e-tests-configmap-5p4d2" to be "success or failure" Aug 27 14:41:34.691: INFO: Pod "pod-configmaps-67a6d67f-e873-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 67.150278ms Aug 27 14:41:36.841: INFO: Pod "pod-configmaps-67a6d67f-e873-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217513794s Aug 27 14:41:38.846: INFO: Pod "pod-configmaps-67a6d67f-e873-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 4.22287568s Aug 27 14:41:40.853: INFO: Pod "pod-configmaps-67a6d67f-e873-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.229852702s STEP: Saw pod success Aug 27 14:41:40.854: INFO: Pod "pod-configmaps-67a6d67f-e873-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:41:40.867: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-67a6d67f-e873-11ea-894e-0242ac11000d container configmap-volume-test: STEP: delete the pod Aug 27 14:41:40.970: INFO: Waiting for pod pod-configmaps-67a6d67f-e873-11ea-894e-0242ac11000d to disappear Aug 27 14:41:40.974: INFO: Pod pod-configmaps-67a6d67f-e873-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:41:40.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-5p4d2" for this suite. Aug 27 14:41:49.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:41:49.149: INFO: namespace: e2e-tests-configmap-5p4d2, resource: bindings, ignored listing per whitelist Aug 27 14:41:49.174: INFO: namespace e2e-tests-configmap-5p4d2 deletion completed in 8.193043882s • [SLOW TEST:14.946 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:41:49.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-59k5 STEP: Creating a pod to test atomic-volume-subpath Aug 27 14:41:49.481: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-59k5" in namespace "e2e-tests-subpath-vw9dn" to be "success or failure" Aug 27 14:41:49.584: INFO: Pod "pod-subpath-test-configmap-59k5": Phase="Pending", Reason="", readiness=false. Elapsed: 102.981607ms Aug 27 14:41:51.591: INFO: Pod "pod-subpath-test-configmap-59k5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110486639s Aug 27 14:41:53.597: INFO: Pod "pod-subpath-test-configmap-59k5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116227094s Aug 27 14:41:55.943: INFO: Pod "pod-subpath-test-configmap-59k5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.462607578s Aug 27 14:41:58.176: INFO: Pod "pod-subpath-test-configmap-59k5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.695357609s Aug 27 14:42:00.183: INFO: Pod "pod-subpath-test-configmap-59k5": Phase="Running", Reason="", readiness=false. Elapsed: 10.701937468s Aug 27 14:42:02.188: INFO: Pod "pod-subpath-test-configmap-59k5": Phase="Running", Reason="", readiness=false. Elapsed: 12.707173692s Aug 27 14:42:04.194: INFO: Pod "pod-subpath-test-configmap-59k5": Phase="Running", Reason="", readiness=false. Elapsed: 14.712931817s Aug 27 14:42:06.200: INFO: Pod "pod-subpath-test-configmap-59k5": Phase="Running", Reason="", readiness=false. Elapsed: 16.719338284s Aug 27 14:42:08.230: INFO: Pod "pod-subpath-test-configmap-59k5": Phase="Running", Reason="", readiness=false. Elapsed: 18.749555335s Aug 27 14:42:10.238: INFO: Pod "pod-subpath-test-configmap-59k5": Phase="Running", Reason="", readiness=false. Elapsed: 20.757014018s Aug 27 14:42:12.245: INFO: Pod "pod-subpath-test-configmap-59k5": Phase="Running", Reason="", readiness=false. Elapsed: 22.763842048s Aug 27 14:42:14.252: INFO: Pod "pod-subpath-test-configmap-59k5": Phase="Running", Reason="", readiness=false. Elapsed: 24.771591435s Aug 27 14:42:16.259: INFO: Pod "pod-subpath-test-configmap-59k5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.778359918s STEP: Saw pod success Aug 27 14:42:16.260: INFO: Pod "pod-subpath-test-configmap-59k5" satisfied condition "success or failure" Aug 27 14:42:16.330: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-59k5 container test-container-subpath-configmap-59k5: STEP: delete the pod Aug 27 14:42:16.508: INFO: Waiting for pod pod-subpath-test-configmap-59k5 to disappear Aug 27 14:42:16.520: INFO: Pod pod-subpath-test-configmap-59k5 no longer exists STEP: Deleting pod pod-subpath-test-configmap-59k5 Aug 27 14:42:16.521: INFO: Deleting pod "pod-subpath-test-configmap-59k5" in namespace "e2e-tests-subpath-vw9dn" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:42:16.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-vw9dn" for this suite. 
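As a reference for the Subpath test above, a minimal, hypothetical reproduction of the mechanic it exercises: a single ConfigMap key is bind-mounted via subPath over a file path that already exists in the image. All names, the image, and the target path below are illustrative, not taken from the test.

# Illustrative only -- object names, image, and target path are made up.
kubectl create configmap subpath-demo --from-literal=payload=from-configmap

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/passwd"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/passwd   # existing file path (illustrative); subPath mounts a single key over it
      subPath: payload
  volumes:
  - name: cfg
    configMap:
      name: subpath-demo
EOF

kubectl logs subpath-demo   # expected output once the pod completes: from-configmap
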
Aug 27 14:42:22.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:42:22.674: INFO: namespace: e2e-tests-subpath-vw9dn, resource: bindings, ignored listing per whitelist Aug 27 14:42:22.693: INFO: namespace e2e-tests-subpath-vw9dn deletion completed in 6.157197616s • [SLOW TEST:33.518 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:42:22.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 27 14:42:22.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-xz22p' Aug 27 14:42:24.173: INFO: stderr: "" Aug 27 14:42:24.173: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Aug 27 14:42:29.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-xz22p -o json' Aug 27 14:42:30.477: INFO: stderr: "" Aug 27 14:42:30.477: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-08-27T14:42:24Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-xz22p\",\n \"resourceVersion\": \"2649959\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-xz22p/pods/e2e-test-nginx-pod\",\n \"uid\": \"852455a5-e873-11ea-a485-0242ac120004\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-grqwg\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n 
\"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-grqwg\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-grqwg\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-27T14:42:24Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-27T14:42:27Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-27T14:42:27Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-27T14:42:24Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://6c3d50ae39714f35a0ae06949645736b0e9e6356b6da801c891eda1ddd6b847c\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-08-27T14:42:27Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.8\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.200\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-08-27T14:42:24Z\"\n }\n}\n" STEP: replace the image in the pod Aug 27 14:42:30.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-xz22p' Aug 27 14:42:32.113: INFO: stderr: "" Aug 27 14:42:32.113: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Aug 27 14:42:32.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-xz22p' Aug 27 14:42:48.403: INFO: stderr: "" Aug 27 14:42:48.403: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:42:48.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-xz22p" for this suite. 
Aug 27 14:42:54.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:42:54.463: INFO: namespace: e2e-tests-kubectl-xz22p, resource: bindings, ignored listing per whitelist Aug 27 14:42:54.561: INFO: namespace e2e-tests-kubectl-xz22p deletion completed in 6.148661021s • [SLOW TEST:31.867 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:42:54.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-977712a4-e873-11ea-894e-0242ac11000d STEP: Creating a pod to test consume configMaps Aug 27 14:42:54.904: INFO: Waiting up to 5m0s for pod "pod-configmaps-978175c6-e873-11ea-894e-0242ac11000d" in namespace "e2e-tests-configmap-qjkxd" to be "success or failure" Aug 27 14:42:54.924: INFO: Pod "pod-configmaps-978175c6-e873-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.261718ms Aug 27 14:42:57.080: INFO: Pod "pod-configmaps-978175c6-e873-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174946992s Aug 27 14:42:59.086: INFO: Pod "pod-configmaps-978175c6-e873-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 4.181177343s Aug 27 14:43:01.093: INFO: Pod "pod-configmaps-978175c6-e873-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.188823442s STEP: Saw pod success Aug 27 14:43:01.094: INFO: Pod "pod-configmaps-978175c6-e873-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:43:01.099: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-978175c6-e873-11ea-894e-0242ac11000d container configmap-volume-test: STEP: delete the pod Aug 27 14:43:01.146: INFO: Waiting for pod pod-configmaps-978175c6-e873-11ea-894e-0242ac11000d to disappear Aug 27 14:43:01.162: INFO: Pod pod-configmaps-978175c6-e873-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:43:01.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-qjkxd" for this suite. 
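A minimal sketch of what the ConfigMap volume test above checks: each ConfigMap key becomes a file under the mount path and the pod can read it. Names and image are illustrative, not the test's own.

kubectl create configmap cm-demo --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/config/data-1"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: cm-demo
EOF

kubectl logs cm-volume-demo   # expected: value-1
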
Aug 27 14:43:11.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:43:11.555: INFO: namespace: e2e-tests-configmap-qjkxd, resource: bindings, ignored listing per whitelist Aug 27 14:43:11.577: INFO: namespace e2e-tests-configmap-qjkxd deletion completed in 10.404408314s • [SLOW TEST:17.015 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:43:11.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0827 14:43:44.439667 7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 27 14:43:44.440: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:43:44.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-qhmkt" for this suite. 
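The garbage-collector test above deletes a Deployment with deleteOptions.PropagationPolicy=Orphan and checks that its ReplicaSet survives. A hypothetical command-line equivalent follows; the deployment name is made up, and --cascade=false is the v1.13-era spelling of an orphaning delete (newer kubectl uses --cascade=orphan).

kubectl create deployment gc-demo --image=docker.io/library/nginx:1.14-alpine
kubectl get rs -l app=gc-demo                      # the ReplicaSet owned by the Deployment

kubectl delete deployment gc-demo --cascade=false  # orphan dependents instead of deleting them
kubectl get rs -l app=gc-demo                      # the ReplicaSet is still there, now ownerless
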
Aug 27 14:43:54.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:43:54.609: INFO: namespace: e2e-tests-gc-qhmkt, resource: bindings, ignored listing per whitelist Aug 27 14:43:54.675: INFO: namespace e2e-tests-gc-qhmkt deletion completed in 10.227802631s • [SLOW TEST:43.098 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:43:54.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Aug 27 14:43:55.797: INFO: Waiting up to 5m0s for pod "pod-bbb3067b-e873-11ea-894e-0242ac11000d" in namespace "e2e-tests-emptydir-qzdmv" to be "success or failure" Aug 27 14:43:56.106: INFO: Pod "pod-bbb3067b-e873-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 307.863341ms Aug 27 14:43:58.113: INFO: Pod "pod-bbb3067b-e873-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315307677s Aug 27 14:44:00.120: INFO: Pod "pod-bbb3067b-e873-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.322817334s Aug 27 14:44:02.128: INFO: Pod "pod-bbb3067b-e873-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.330082354s STEP: Saw pod success Aug 27 14:44:02.128: INFO: Pod "pod-bbb3067b-e873-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:44:02.133: INFO: Trying to get logs from node hunter-worker2 pod pod-bbb3067b-e873-11ea-894e-0242ac11000d container test-container: STEP: delete the pod Aug 27 14:44:02.335: INFO: Waiting for pod pod-bbb3067b-e873-11ea-894e-0242ac11000d to disappear Aug 27 14:44:02.392: INFO: Pod pod-bbb3067b-e873-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:44:02.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-qzdmv" for this suite. 
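For the EmptyDir (...,tmpfs) cases above, the underlying mechanic is a memory-backed emptyDir volume. The conformance test also pins the file mode (0644/0666) and the user the container runs as; this hypothetical sketch omits those details and only shows the volume itself.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "echo hello > /mnt/volume/file && ls -l /mnt/volume/file && cat /mnt/volume/file"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory   # tmpfs-backed, as in the (...,tmpfs) test variants
EOF

kubectl logs emptydir-tmpfs-demo
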
Aug 27 14:44:08.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:44:08.965: INFO: namespace: e2e-tests-emptydir-qzdmv, resource: bindings, ignored listing per whitelist Aug 27 14:44:09.029: INFO: namespace e2e-tests-emptydir-qzdmv deletion completed in 6.307483931s • [SLOW TEST:14.353 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:44:09.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 27 14:44:09.495: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Aug 27 14:44:09.529: INFO: Number of nodes with available pods: 0 Aug 27 14:44:09.529: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Aug 27 14:44:09.660: INFO: Number of nodes with available pods: 0 Aug 27 14:44:09.660: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:44:10.830: INFO: Number of nodes with available pods: 0 Aug 27 14:44:10.830: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:44:11.667: INFO: Number of nodes with available pods: 0 Aug 27 14:44:11.667: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:44:12.667: INFO: Number of nodes with available pods: 0 Aug 27 14:44:12.667: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:44:13.667: INFO: Number of nodes with available pods: 0 Aug 27 14:44:13.667: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:44:14.728: INFO: Number of nodes with available pods: 0 Aug 27 14:44:14.728: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:44:15.667: INFO: Number of nodes with available pods: 1 Aug 27 14:44:15.667: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Aug 27 14:44:15.753: INFO: Number of nodes with available pods: 1 Aug 27 14:44:15.753: INFO: Number of running nodes: 0, number of available pods: 1 Aug 27 14:44:16.761: INFO: Number of nodes with available pods: 0 Aug 27 14:44:16.761: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Aug 27 14:44:16.907: INFO: Number of nodes with available pods: 0 Aug 27 14:44:16.907: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:44:17.941: INFO: Number of nodes with available pods: 0 Aug 27 14:44:17.941: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:44:18.914: INFO: Number of nodes with available pods: 0 Aug 27 14:44:18.914: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:44:19.913: INFO: Number of nodes with available pods: 0 Aug 27 14:44:19.914: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:44:20.913: INFO: Number of nodes with available pods: 0 Aug 27 14:44:20.913: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:44:21.912: INFO: Number of nodes with available pods: 0 Aug 27 14:44:21.913: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:44:22.926: INFO: Number of nodes with available pods: 0 Aug 27 14:44:22.927: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:44:23.917: INFO: Number of nodes with available pods: 0 Aug 27 14:44:23.918: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:44:24.968: INFO: Number of nodes with available pods: 0 Aug 27 14:44:24.968: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:44:25.912: INFO: Number of nodes with available pods: 0 Aug 27 14:44:25.912: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:44:26.914: INFO: Number of nodes with available pods: 0 Aug 27 14:44:26.914: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:44:27.951: INFO: Number of nodes with available pods: 0 Aug 27 14:44:27.951: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:44:28.914: INFO: Number of nodes with available pods: 0 Aug 27 14:44:28.914: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:44:29.913: INFO: Number of nodes with available pods: 0 Aug 27 14:44:29.913: INFO: Node hunter-worker is running 
more than one daemon pod Aug 27 14:44:30.974: INFO: Number of nodes with available pods: 0 Aug 27 14:44:30.974: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:44:31.915: INFO: Number of nodes with available pods: 0 Aug 27 14:44:31.915: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:44:32.914: INFO: Number of nodes with available pods: 0 Aug 27 14:44:32.915: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:44:34.292: INFO: Number of nodes with available pods: 0 Aug 27 14:44:34.292: INFO: Node hunter-worker is running more than one daemon pod Aug 27 14:44:34.912: INFO: Number of nodes with available pods: 1 Aug 27 14:44:34.912: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-kwgrp, will wait for the garbage collector to delete the pods Aug 27 14:44:34.988: INFO: Deleting DaemonSet.extensions daemon-set took: 12.079107ms Aug 27 14:44:35.189: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.824517ms Aug 27 14:44:38.395: INFO: Number of nodes with available pods: 0 Aug 27 14:44:38.395: INFO: Number of running nodes: 0, number of available pods: 0 Aug 27 14:44:38.400: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-kwgrp/daemonsets","resourceVersion":"2650387"},"items":null} Aug 27 14:44:38.403: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-kwgrp/pods","resourceVersion":"2650387"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:44:38.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-kwgrp" for this suite. 
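The DaemonSet test above drives scheduling entirely through node labels and a nodeSelector. A rough, hypothetical equivalent follows; the label key "color", the object names, and the image are illustrative, while hunter-worker is simply one of the nodes seen in this run.

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo
spec:
  selector:
    matchLabels:
      app: daemon-set-demo
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      nodeSelector:
        color: blue                      # pods land only on nodes labelled color=blue
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF

kubectl label node hunter-worker color=blue               # a daemon pod is scheduled there
kubectl label node hunter-worker color=green --overwrite  # the daemon pod is removed again
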
Aug 27 14:44:46.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:44:46.574: INFO: namespace: e2e-tests-daemonsets-kwgrp, resource: bindings, ignored listing per whitelist Aug 27 14:44:46.713: INFO: namespace e2e-tests-daemonsets-kwgrp deletion completed in 8.235398178s • [SLOW TEST:37.684 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:44:46.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0827 14:44:57.131708 7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 27 14:44:57.131: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:44:57.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-mzmwl" for this suite. 
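The companion garbage-collector test above goes the other way: a cascading delete of the ReplicationController must remove its pods. A hypothetical equivalent with made-up names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: gc-rc-demo
spec:
  replicas: 2
  selector:
    app: gc-rc-demo
  template:
    metadata:
      labels:
        app: gc-rc-demo
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF

kubectl delete rc gc-rc-demo         # default cascading delete: the garbage collector removes the pods too
kubectl get pods -l app=gc-rc-demo   # eventually returns nothing
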
Aug 27 14:45:03.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:45:03.362: INFO: namespace: e2e-tests-gc-mzmwl, resource: bindings, ignored listing per whitelist Aug 27 14:45:03.442: INFO: namespace e2e-tests-gc-mzmwl deletion completed in 6.302816157s • [SLOW TEST:16.728 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:45:03.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Aug 27 14:45:04.117: INFO: Waiting up to 5m0s for pod "var-expansion-e4860333-e873-11ea-894e-0242ac11000d" in namespace "e2e-tests-var-expansion-8mhm6" to be "success or failure" Aug 27 14:45:04.362: INFO: Pod "var-expansion-e4860333-e873-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 245.857461ms Aug 27 14:45:06.368: INFO: Pod "var-expansion-e4860333-e873-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.251312761s Aug 27 14:45:08.561: INFO: Pod "var-expansion-e4860333-e873-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.443942948s Aug 27 14:45:10.566: INFO: Pod "var-expansion-e4860333-e873-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.449171666s STEP: Saw pod success Aug 27 14:45:10.566: INFO: Pod "var-expansion-e4860333-e873-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:45:10.570: INFO: Trying to get logs from node hunter-worker pod var-expansion-e4860333-e873-11ea-894e-0242ac11000d container dapi-container: STEP: delete the pod Aug 27 14:45:10.593: INFO: Waiting for pod var-expansion-e4860333-e873-11ea-894e-0242ac11000d to disappear Aug 27 14:45:10.597: INFO: Pod var-expansion-e4860333-e873-11ea-894e-0242ac11000d no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:45:10.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-8mhm6" for this suite. 
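The Variable Expansion test above relies on the fact that an env entry may reference previously defined entries with $(NAME). A small hypothetical pod showing the same composition; names and values are made up.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "echo $FOOBAR"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"   # composed from the env vars defined above it
EOF

kubectl logs var-expansion-demo   # expected: foo-value;;bar-value
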
Aug 27 14:45:16.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:45:16.696: INFO: namespace: e2e-tests-var-expansion-8mhm6, resource: bindings, ignored listing per whitelist Aug 27 14:45:16.751: INFO: namespace e2e-tests-var-expansion-8mhm6 deletion completed in 6.142664275s • [SLOW TEST:13.307 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:45:16.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 27 14:45:16.963: INFO: Creating ReplicaSet my-hostname-basic-ec2f6658-e873-11ea-894e-0242ac11000d Aug 27 14:45:16.979: INFO: Pod name my-hostname-basic-ec2f6658-e873-11ea-894e-0242ac11000d: Found 0 pods out of 1 Aug 27 14:45:21.986: INFO: Pod name my-hostname-basic-ec2f6658-e873-11ea-894e-0242ac11000d: Found 1 pods out of 1 Aug 27 14:45:21.986: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-ec2f6658-e873-11ea-894e-0242ac11000d" is running Aug 27 14:45:21.991: INFO: Pod "my-hostname-basic-ec2f6658-e873-11ea-894e-0242ac11000d-fmsh4" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-27 14:45:17 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-27 14:45:20 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-27 14:45:20 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-27 14:45:16 +0000 UTC Reason: Message:}]) Aug 27 14:45:21.992: INFO: Trying to dial the pod Aug 27 14:45:27.007: INFO: Controller my-hostname-basic-ec2f6658-e873-11ea-894e-0242ac11000d: Got expected result from replica 1 [my-hostname-basic-ec2f6658-e873-11ea-894e-0242ac11000d-fmsh4]: "my-hostname-basic-ec2f6658-e873-11ea-894e-0242ac11000d-fmsh4", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:45:27.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-mq2zk" for this suite. 
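The ReplicaSet test above creates a single-replica set from a public image, waits for the pod, then dials each replica and expects it to answer with its own hostname. A hypothetical manifest for the first half; the test uses its own hostname-serving image, so nginx here is only a stand-in.

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-hostname-basic-demo
  template:
    metadata:
      labels:
        app: my-hostname-basic-demo
    spec:
      containers:
      - name: server
        image: docker.io/library/nginx:1.14-alpine
        ports:
        - containerPort: 80
EOF

kubectl get pods -l app=my-hostname-basic-demo -o wide   # one Running pod per requested replica
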
Aug 27 14:45:33.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:45:33.269: INFO: namespace: e2e-tests-replicaset-mq2zk, resource: bindings, ignored listing per whitelist Aug 27 14:45:33.412: INFO: namespace e2e-tests-replicaset-mq2zk deletion completed in 6.397170162s • [SLOW TEST:16.661 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:45:33.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller Aug 27 14:45:33.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7wbhs' Aug 27 14:45:35.606: INFO: stderr: "" Aug 27 14:45:35.606: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 27 14:45:35.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7wbhs' Aug 27 14:45:36.912: INFO: stderr: "" Aug 27 14:45:36.912: INFO: stdout: "update-demo-nautilus-nhctq update-demo-nautilus-z6r2m " Aug 27 14:45:36.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nhctq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7wbhs' Aug 27 14:45:38.332: INFO: stderr: "" Aug 27 14:45:38.333: INFO: stdout: "" Aug 27 14:45:38.333: INFO: update-demo-nautilus-nhctq is created but not running Aug 27 14:45:43.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7wbhs' Aug 27 14:45:44.582: INFO: stderr: "" Aug 27 14:45:44.582: INFO: stdout: "update-demo-nautilus-nhctq update-demo-nautilus-z6r2m " Aug 27 14:45:44.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nhctq -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7wbhs' Aug 27 14:45:45.783: INFO: stderr: "" Aug 27 14:45:45.783: INFO: stdout: "true" Aug 27 14:45:45.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nhctq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7wbhs' Aug 27 14:45:47.168: INFO: stderr: "" Aug 27 14:45:47.168: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 27 14:45:47.168: INFO: validating pod update-demo-nautilus-nhctq Aug 27 14:45:47.551: INFO: got data: { "image": "nautilus.jpg" } Aug 27 14:45:47.552: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 27 14:45:47.553: INFO: update-demo-nautilus-nhctq is verified up and running Aug 27 14:45:47.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z6r2m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7wbhs' Aug 27 14:45:48.917: INFO: stderr: "" Aug 27 14:45:48.917: INFO: stdout: "true" Aug 27 14:45:48.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z6r2m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7wbhs' Aug 27 14:45:50.230: INFO: stderr: "" Aug 27 14:45:50.230: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 27 14:45:50.230: INFO: validating pod update-demo-nautilus-z6r2m Aug 27 14:45:50.234: INFO: got data: { "image": "nautilus.jpg" } Aug 27 14:45:50.234: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 27 14:45:50.234: INFO: update-demo-nautilus-z6r2m is verified up and running STEP: rolling-update to new replication controller Aug 27 14:45:50.303: INFO: scanned /root for discovery docs: Aug 27 14:45:50.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-7wbhs' Aug 27 14:46:15.266: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Aug 27 14:46:15.266: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Aug 27 14:46:15.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7wbhs' Aug 27 14:46:16.605: INFO: stderr: "" Aug 27 14:46:16.605: INFO: stdout: "update-demo-kitten-2cb96 update-demo-kitten-q9t9j " Aug 27 14:46:16.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2cb96 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7wbhs' Aug 27 14:46:17.846: INFO: stderr: "" Aug 27 14:46:17.846: INFO: stdout: "true" Aug 27 14:46:17.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2cb96 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7wbhs' Aug 27 14:46:19.013: INFO: stderr: "" Aug 27 14:46:19.013: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Aug 27 14:46:19.013: INFO: validating pod update-demo-kitten-2cb96 Aug 27 14:46:19.017: INFO: got data: { "image": "kitten.jpg" } Aug 27 14:46:19.017: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Aug 27 14:46:19.017: INFO: update-demo-kitten-2cb96 is verified up and running Aug 27 14:46:19.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-q9t9j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7wbhs' Aug 27 14:46:20.255: INFO: stderr: "" Aug 27 14:46:20.256: INFO: stdout: "true" Aug 27 14:46:20.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-q9t9j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7wbhs' Aug 27 14:46:21.481: INFO: stderr: "" Aug 27 14:46:21.481: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Aug 27 14:46:21.481: INFO: validating pod update-demo-kitten-q9t9j Aug 27 14:46:21.508: INFO: got data: { "image": "kitten.jpg" } Aug 27 14:46:21.508: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Aug 27 14:46:21.508: INFO: update-demo-kitten-q9t9j is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:46:21.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7wbhs" for this suite. 
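The Update Demo test above uses the long-deprecated kubectl rolling-update to move a ReplicationController from the nautilus image to the kitten image one pod at a time. A hypothetical minimal version of the same flow follows; the test feeds a full replacement RC manifest on stdin, whereas the --image shorthand shown here is the simpler documented form of the old command.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80
EOF

# Deprecated even in v1.13 ("use rollout instead"); shown only to mirror the log above.
kubectl rolling-update update-demo-nautilus --update-period=1s \
  --image=gcr.io/kubernetes-e2e-test-images/kitten:1.0
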
Aug 27 14:46:47.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:46:47.581: INFO: namespace: e2e-tests-kubectl-7wbhs, resource: bindings, ignored listing per whitelist Aug 27 14:46:47.644: INFO: namespace e2e-tests-kubectl-7wbhs deletion completed in 26.12771628s • [SLOW TEST:74.231 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:46:47.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command Aug 27 14:46:47.823: INFO: Waiting up to 5m0s for pod "var-expansion-2255573d-e874-11ea-894e-0242ac11000d" in namespace "e2e-tests-var-expansion-d4bs9" to be "success or failure" Aug 27 14:46:47.878: INFO: Pod "var-expansion-2255573d-e874-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 55.135754ms Aug 27 14:46:49.886: INFO: Pod "var-expansion-2255573d-e874-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062903061s Aug 27 14:46:51.891: INFO: Pod "var-expansion-2255573d-e874-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 4.068097706s Aug 27 14:46:53.932: INFO: Pod "var-expansion-2255573d-e874-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.109326989s STEP: Saw pod success Aug 27 14:46:53.933: INFO: Pod "var-expansion-2255573d-e874-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:46:53.938: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-2255573d-e874-11ea-894e-0242ac11000d container dapi-container: STEP: delete the pod Aug 27 14:46:53.965: INFO: Waiting for pod var-expansion-2255573d-e874-11ea-894e-0242ac11000d to disappear Aug 27 14:46:53.988: INFO: Pod var-expansion-2255573d-e874-11ea-894e-0242ac11000d no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:46:53.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-d4bs9" for this suite. 
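The Variable Expansion test above checks $(VAR) substitution in a container's command and args, which the kubelet performs before the process starts. A hypothetical example with made-up names and values:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-cmd-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    env:
    - name: MESSAGE
      value: hello from the environment
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]   # substituted by the kubelet, not by a shell
EOF

kubectl logs var-expansion-cmd-demo   # expected: hello from the environment
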
Aug 27 14:47:00.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:47:00.129: INFO: namespace: e2e-tests-var-expansion-d4bs9, resource: bindings, ignored listing per whitelist Aug 27 14:47:00.138: INFO: namespace e2e-tests-var-expansion-d4bs9 deletion completed in 6.144867436s • [SLOW TEST:12.492 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:47:00.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments Aug 27 14:47:00.443: INFO: Waiting up to 5m0s for pod "client-containers-29d352ec-e874-11ea-894e-0242ac11000d" in namespace "e2e-tests-containers-wqk6b" to be "success or failure" Aug 27 14:47:00.452: INFO: Pod "client-containers-29d352ec-e874-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.873927ms Aug 27 14:47:02.456: INFO: Pod "client-containers-29d352ec-e874-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012215542s Aug 27 14:47:04.461: INFO: Pod "client-containers-29d352ec-e874-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017258977s Aug 27 14:47:06.465: INFO: Pod "client-containers-29d352ec-e874-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021389155s STEP: Saw pod success Aug 27 14:47:06.465: INFO: Pod "client-containers-29d352ec-e874-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:47:06.468: INFO: Trying to get logs from node hunter-worker2 pod client-containers-29d352ec-e874-11ea-894e-0242ac11000d container test-container: STEP: delete the pod Aug 27 14:47:06.579: INFO: Waiting for pod client-containers-29d352ec-e874-11ea-894e-0242ac11000d to disappear Aug 27 14:47:06.656: INFO: Pod client-containers-29d352ec-e874-11ea-894e-0242ac11000d no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:47:06.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-wqk6b" for this suite. 
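The Docker Containers test above verifies that a pod's args field replaces the image's default CMD. A hypothetical example, assuming an image (such as busybox) with no entrypoint of its own:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: override-args-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    args: ["echo", "overridden arguments"]   # replaces the image's default CMD ("sh")
EOF

kubectl logs override-args-demo   # expected: overridden arguments
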
Aug 27 14:47:12.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:47:12.696: INFO: namespace: e2e-tests-containers-wqk6b, resource: bindings, ignored listing per whitelist Aug 27 14:47:12.782: INFO: namespace e2e-tests-containers-wqk6b deletion completed in 6.12029547s • [SLOW TEST:12.643 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:47:12.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:47:13.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-jwnkm" for this suite. 
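The Pods Set QOS Class test above only submits a pod, reads .status.qosClass, and deletes it. A hypothetical Guaranteed-class example; the test's own pod may use a different class, and Guaranteed simply requires requests to equal limits for every resource of every container.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
      limits:
        cpu: 100m
        memory: 64Mi
EOF

kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # Guaranteed
kubectl delete pod qos-demo
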
Aug 27 14:50:41.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:50:41.235: INFO: namespace: e2e-tests-pods-jwnkm, resource: bindings, ignored listing per whitelist Aug 27 14:50:41.287: INFO: namespace e2e-tests-pods-jwnkm deletion completed in 3m28.210497562s • [SLOW TEST:208.504 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:50:41.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-zhw89 in namespace e2e-tests-proxy-6vt66 I0827 14:50:41.536993 7 runners.go:184] Created replication controller with name: proxy-service-zhw89, namespace: e2e-tests-proxy-6vt66, replica count: 1 I0827 14:50:42.589484 7 runners.go:184] proxy-service-zhw89 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0827 14:50:43.590629 7 runners.go:184] proxy-service-zhw89 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0827 14:50:44.591471 7 runners.go:184] proxy-service-zhw89 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0827 14:50:45.592658 7 runners.go:184] proxy-service-zhw89 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0827 14:50:46.593162 7 runners.go:184] proxy-service-zhw89 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0827 14:50:47.593550 7 runners.go:184] proxy-service-zhw89 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0827 14:50:48.594131 7 runners.go:184] proxy-service-zhw89 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 27 14:50:48.602: INFO: setup took 7.13783366s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Aug 27 14:50:48.618: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-6vt66/pods/http:proxy-service-zhw89-9lch9:160/proxy/: foo (200; 15.241623ms) Aug 27 14:50:48.618: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-6vt66/pods/proxy-service-zhw89-9lch9:162/proxy/: bar (200; 15.469948ms) Aug 27 14:50:48.618: INFO: (0) 
/api/v1/namespaces/e2e-tests-proxy-6vt66/pods/http:proxy-service-zhw89-9lch9:162/proxy/: bar (200; 15.527994ms) Aug 27 14:50:48.618: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-6vt66/pods/https:proxy-service-zhw89-9lch9:462/proxy/: tls qux (200; 15.473134ms) Aug 27 14:50:48.619: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-6vt66/pods/proxy-service-zhw89-9lch9:160/proxy/: foo (200; 15.02523ms) Aug 27 14:50:48.619: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-6vt66/pods/proxy-service-zhw89-9lch9/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-bd97a625-e874-11ea-894e-0242ac11000d STEP: Creating a pod to test consume configMaps Aug 27 14:51:08.510: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bd9838d0-e874-11ea-894e-0242ac11000d" in namespace "e2e-tests-projected-4tcwf" to be "success or failure" Aug 27 14:51:08.739: INFO: Pod "pod-projected-configmaps-bd9838d0-e874-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 228.308146ms Aug 27 14:51:10.745: INFO: Pod "pod-projected-configmaps-bd9838d0-e874-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234747946s Aug 27 14:51:12.751: INFO: Pod "pod-projected-configmaps-bd9838d0-e874-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.240823221s Aug 27 14:51:14.759: INFO: Pod "pod-projected-configmaps-bd9838d0-e874-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.248374761s STEP: Saw pod success Aug 27 14:51:14.759: INFO: Pod "pod-projected-configmaps-bd9838d0-e874-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:51:14.764: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-bd9838d0-e874-11ea-894e-0242ac11000d container projected-configmap-volume-test: STEP: delete the pod Aug 27 14:51:15.068: INFO: Waiting for pod pod-projected-configmaps-bd9838d0-e874-11ea-894e-0242ac11000d to disappear Aug 27 14:51:16.326: INFO: Pod pod-projected-configmaps-bd9838d0-e874-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:51:16.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-4tcwf" for this suite. 
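The "with mappings" case above differs from the earlier defaultMode case in that individual ConfigMap keys are remapped to chosen file paths through the projected source's items list. A minimal illustrative equivalent follows; the ConfigMap name, key, and pod name are placeholders rather than the generated ones seen in the log:

kubectl create configmap demo-cm --from-literal=data-1='value-1'
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo                   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: docker.io/library/busybox:1.29   # stand-in image
    command: ["cat", "/etc/projected/path/to/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-cm
          items:
          - key: data-1                # ConfigMap key ...
            path: path/to/data-1       # ... exposed at this relative path inside the mount
EOF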
Aug 27 14:51:23.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:51:24.088: INFO: namespace: e2e-tests-projected-4tcwf, resource: bindings, ignored listing per whitelist Aug 27 14:51:24.106: INFO: namespace e2e-tests-projected-4tcwf deletion completed in 7.326858505s • [SLOW TEST:16.793 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:51:24.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-nlxcd STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 27 14:51:24.903: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 27 14:52:04.503: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.211:8080/dial?request=hostName&protocol=http&host=10.244.2.210&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-nlxcd PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 27 14:52:04.504: INFO: >>> kubeConfig: /root/.kube/config I0827 14:52:05.043090 7 log.go:172] (0x4001cc8370) (0x4001785220) Create stream I0827 14:52:05.043369 7 log.go:172] (0x4001cc8370) (0x4001785220) Stream added, broadcasting: 1 I0827 14:52:05.047823 7 log.go:172] (0x4001cc8370) Reply frame received for 1 I0827 14:52:05.048075 7 log.go:172] (0x4001cc8370) (0x40010f05a0) Create stream I0827 14:52:05.048168 7 log.go:172] (0x4001cc8370) (0x40010f05a0) Stream added, broadcasting: 3 I0827 14:52:05.049913 7 log.go:172] (0x4001cc8370) Reply frame received for 3 I0827 14:52:05.050048 7 log.go:172] (0x4001cc8370) (0x40010f0820) Create stream I0827 14:52:05.050124 7 log.go:172] (0x4001cc8370) (0x40010f0820) Stream added, broadcasting: 5 I0827 14:52:05.051632 7 log.go:172] (0x4001cc8370) Reply frame received for 5 I0827 14:52:05.128642 7 log.go:172] (0x4001cc8370) Data frame received for 3 I0827 14:52:05.128897 7 log.go:172] (0x40010f05a0) (3) Data frame handling I0827 14:52:05.129020 7 log.go:172] (0x40010f05a0) (3) Data frame sent I0827 14:52:05.129414 7 log.go:172] (0x4001cc8370) Data frame received for 3 I0827 14:52:05.129552 7 log.go:172] (0x40010f05a0) (3) Data frame handling I0827 14:52:05.129658 7 log.go:172] (0x4001cc8370) Data frame received for 5 I0827 14:52:05.129727 7 log.go:172] (0x40010f0820) (5) Data frame handling I0827 14:52:05.130940 7 log.go:172] (0x4001cc8370) 
Data frame received for 1 I0827 14:52:05.130997 7 log.go:172] (0x4001785220) (1) Data frame handling I0827 14:52:05.131077 7 log.go:172] (0x4001785220) (1) Data frame sent I0827 14:52:05.131170 7 log.go:172] (0x4001cc8370) (0x4001785220) Stream removed, broadcasting: 1 I0827 14:52:05.131287 7 log.go:172] (0x4001cc8370) Go away received I0827 14:52:05.131512 7 log.go:172] (0x4001cc8370) (0x4001785220) Stream removed, broadcasting: 1 I0827 14:52:05.131581 7 log.go:172] (0x4001cc8370) (0x40010f05a0) Stream removed, broadcasting: 3 I0827 14:52:05.131637 7 log.go:172] (0x4001cc8370) (0x40010f0820) Stream removed, broadcasting: 5 Aug 27 14:52:05.132: INFO: Waiting for endpoints: map[] Aug 27 14:52:05.913: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.211:8080/dial?request=hostName&protocol=http&host=10.244.1.51&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-nlxcd PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 27 14:52:05.913: INFO: >>> kubeConfig: /root/.kube/config I0827 14:52:06.176127 7 log.go:172] (0x40003be9a0) (0x400112cfa0) Create stream I0827 14:52:06.176276 7 log.go:172] (0x40003be9a0) (0x400112cfa0) Stream added, broadcasting: 1 I0827 14:52:06.179369 7 log.go:172] (0x40003be9a0) Reply frame received for 1 I0827 14:52:06.179506 7 log.go:172] (0x40003be9a0) (0x400099c8c0) Create stream I0827 14:52:06.179579 7 log.go:172] (0x40003be9a0) (0x400099c8c0) Stream added, broadcasting: 3 I0827 14:52:06.180682 7 log.go:172] (0x40003be9a0) Reply frame received for 3 I0827 14:52:06.180868 7 log.go:172] (0x40003be9a0) (0x400112d040) Create stream I0827 14:52:06.180925 7 log.go:172] (0x40003be9a0) (0x400112d040) Stream added, broadcasting: 5 I0827 14:52:06.181966 7 log.go:172] (0x40003be9a0) Reply frame received for 5 I0827 14:52:06.234855 7 log.go:172] (0x40003be9a0) Data frame received for 3 I0827 14:52:06.235086 7 log.go:172] (0x400099c8c0) (3) Data frame handling I0827 14:52:06.235248 7 log.go:172] (0x40003be9a0) Data frame received for 5 I0827 14:52:06.235402 7 log.go:172] (0x400112d040) (5) Data frame handling I0827 14:52:06.235625 7 log.go:172] (0x400099c8c0) (3) Data frame sent I0827 14:52:06.235768 7 log.go:172] (0x40003be9a0) Data frame received for 3 I0827 14:52:06.235878 7 log.go:172] (0x400099c8c0) (3) Data frame handling I0827 14:52:06.236059 7 log.go:172] (0x40003be9a0) Data frame received for 1 I0827 14:52:06.236207 7 log.go:172] (0x400112cfa0) (1) Data frame handling I0827 14:52:06.236311 7 log.go:172] (0x400112cfa0) (1) Data frame sent I0827 14:52:06.236414 7 log.go:172] (0x40003be9a0) (0x400112cfa0) Stream removed, broadcasting: 1 I0827 14:52:06.236560 7 log.go:172] (0x40003be9a0) Go away received I0827 14:52:06.237161 7 log.go:172] (0x40003be9a0) (0x400112cfa0) Stream removed, broadcasting: 1 I0827 14:52:06.237333 7 log.go:172] (0x40003be9a0) (0x400099c8c0) Stream removed, broadcasting: 3 I0827 14:52:06.237428 7 log.go:172] (0x40003be9a0) (0x400112d040) Stream removed, broadcasting: 5 Aug 27 14:52:06.237: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:52:06.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-nlxcd" for this suite. 
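The exec calls captured above run curl from the test's host-exec pod against the netexec-style container, whose /dial endpoint connects to the target pod and echoes back the hostname it reached. Outside the framework the same probe can be reproduced roughly as below; the namespace, pod name, and IPs are the ones from this particular run and will not exist after the namespace is destroyed:

kubectl -n e2e-tests-pod-network-test-nlxcd exec host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s 'http://10.244.2.211:8080/dial?request=hostName&protocol=http&host=10.244.2.210&port=8080&tries=1'"
# A JSON body listing the target's hostname (something like {"responses":["<target-pod-hostname>"]})
# indicates that pod-to-pod HTTP traffic worked; an "errors" list would indicate a failed dial.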
Aug 27 14:52:30.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:52:30.530: INFO: namespace: e2e-tests-pod-network-test-nlxcd, resource: bindings, ignored listing per whitelist Aug 27 14:52:30.690: INFO: namespace e2e-tests-pod-network-test-nlxcd deletion completed in 24.445073286s • [SLOW TEST:66.583 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:52:30.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-ef2aa377-e874-11ea-894e-0242ac11000d STEP: Creating a pod to test consume secrets Aug 27 14:52:31.983: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ef345f99-e874-11ea-894e-0242ac11000d" in namespace "e2e-tests-projected-dr4wn" to be "success or failure" Aug 27 14:52:32.290: INFO: Pod "pod-projected-secrets-ef345f99-e874-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 306.976584ms Aug 27 14:52:34.829: INFO: Pod "pod-projected-secrets-ef345f99-e874-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.845545355s Aug 27 14:52:37.275: INFO: Pod "pod-projected-secrets-ef345f99-e874-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.291972913s Aug 27 14:52:39.283: INFO: Pod "pod-projected-secrets-ef345f99-e874-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.299315326s Aug 27 14:52:41.290: INFO: Pod "pod-projected-secrets-ef345f99-e874-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.306078626s STEP: Saw pod success Aug 27 14:52:41.290: INFO: Pod "pod-projected-secrets-ef345f99-e874-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:52:41.445: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-ef345f99-e874-11ea-894e-0242ac11000d container secret-volume-test: STEP: delete the pod Aug 27 14:52:41.513: INFO: Waiting for pod pod-projected-secrets-ef345f99-e874-11ea-894e-0242ac11000d to disappear Aug 27 14:52:41.637: INFO: Pod pod-projected-secrets-ef345f99-e874-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:52:41.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-dr4wn" for this suite. Aug 27 14:52:49.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:52:49.762: INFO: namespace: e2e-tests-projected-dr4wn, resource: bindings, ignored listing per whitelist Aug 27 14:52:49.984: INFO: namespace e2e-tests-projected-dr4wn deletion completed in 8.339158247s • [SLOW TEST:19.293 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:52:49.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wdh7f A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-wdh7f;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wdh7f A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-wdh7f;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wdh7f.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-wdh7f.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wdh7f.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-wdh7f.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wdh7f.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-wdh7f.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wdh7f.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-wdh7f.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-wdh7f.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 225.34.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.34.225_udp@PTR;check="$$(dig +tcp +noall +answer +search 225.34.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.34.225_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wdh7f A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-wdh7f;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wdh7f A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-wdh7f;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wdh7f.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-wdh7f.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wdh7f.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-wdh7f.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wdh7f.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-wdh7f.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wdh7f.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-wdh7f.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-wdh7f.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 225.34.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.34.225_udp@PTR;check="$$(dig +tcp +noall +answer +search 225.34.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.34.225_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 27 14:53:12.899: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-wdh7f from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:12.918: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:12.957: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:12.960: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:12.963: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wdh7f from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:12.967: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wdh7f from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:12.971: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wdh7f.svc from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:12.975: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wdh7f.svc from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:12.979: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:12.983: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:13.007: INFO: Lookups 
using e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d failed for: [wheezy_udp@dns-test-service.e2e-tests-dns-wdh7f wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-wdh7f jessie_tcp@dns-test-service.e2e-tests-dns-wdh7f jessie_udp@dns-test-service.e2e-tests-dns-wdh7f.svc jessie_tcp@dns-test-service.e2e-tests-dns-wdh7f.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc] Aug 27 14:53:18.037: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-wdh7f from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:18.054: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:18.081: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:18.085: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:18.089: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wdh7f from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:18.093: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wdh7f from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:18.097: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wdh7f.svc from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:18.101: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wdh7f.svc from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:18.105: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:18.110: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:18.131: INFO: Lookups using e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d failed for: [wheezy_udp@dns-test-service.e2e-tests-dns-wdh7f wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc 
jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-wdh7f jessie_tcp@dns-test-service.e2e-tests-dns-wdh7f jessie_udp@dns-test-service.e2e-tests-dns-wdh7f.svc jessie_tcp@dns-test-service.e2e-tests-dns-wdh7f.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc] Aug 27 14:53:23.026: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-wdh7f from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:23.046: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:23.161: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:23.165: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:23.171: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wdh7f from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:23.176: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wdh7f from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:23.181: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wdh7f.svc from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:23.185: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wdh7f.svc from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:23.189: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:23.193: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:23.215: INFO: Lookups using e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d failed for: [wheezy_udp@dns-test-service.e2e-tests-dns-wdh7f wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-wdh7f jessie_tcp@dns-test-service.e2e-tests-dns-wdh7f jessie_udp@dns-test-service.e2e-tests-dns-wdh7f.svc 
jessie_tcp@dns-test-service.e2e-tests-dns-wdh7f.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc] Aug 27 14:53:28.023: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-wdh7f from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:28.044: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:28.070: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:28.074: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:28.078: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wdh7f from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:28.082: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wdh7f from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:28.086: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wdh7f.svc from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:28.089: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wdh7f.svc from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:28.093: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:28.096: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:28.122: INFO: Lookups using e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d failed for: [wheezy_udp@dns-test-service.e2e-tests-dns-wdh7f wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-wdh7f jessie_tcp@dns-test-service.e2e-tests-dns-wdh7f jessie_udp@dns-test-service.e2e-tests-dns-wdh7f.svc jessie_tcp@dns-test-service.e2e-tests-dns-wdh7f.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc] Aug 27 14:53:33.023: INFO: 
Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-wdh7f from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:33.039: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:33.065: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:33.069: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:33.072: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wdh7f from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:33.076: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wdh7f from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:33.079: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wdh7f.svc from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:33.083: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wdh7f.svc from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:33.086: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:33.089: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc from pod e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d: the server could not find the requested resource (get pods dns-test-fb553178-e874-11ea-894e-0242ac11000d) Aug 27 14:53:33.112: INFO: Lookups using e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d failed for: [wheezy_udp@dns-test-service.e2e-tests-dns-wdh7f wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-wdh7f jessie_tcp@dns-test-service.e2e-tests-dns-wdh7f jessie_udp@dns-test-service.e2e-tests-dns-wdh7f.svc jessie_tcp@dns-test-service.e2e-tests-dns-wdh7f.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wdh7f.svc] Aug 27 14:53:38.337: INFO: DNS probes using e2e-tests-dns-wdh7f/dns-test-fb553178-e874-11ea-894e-0242ac11000d succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] 
[sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:53:39.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-wdh7f" for this suite. Aug 27 14:53:47.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:53:47.504: INFO: namespace: e2e-tests-dns-wdh7f, resource: bindings, ignored listing per whitelist Aug 27 14:53:47.544: INFO: namespace e2e-tests-dns-wdh7f deletion completed in 8.230949661s • [SLOW TEST:57.558 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:53:47.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Aug 27 14:53:47.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-74lzx run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Aug 27 14:53:57.788: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0827 14:53:57.629628 2499 log.go:172] (0x40007aa210) (0x400077e140) Create stream\nI0827 14:53:57.632686 2499 log.go:172] (0x40007aa210) (0x400077e140) Stream added, broadcasting: 1\nI0827 14:53:57.645341 2499 log.go:172] (0x40007aa210) Reply frame received for 1\nI0827 14:53:57.645961 2499 log.go:172] (0x40007aa210) (0x400077e1e0) Create stream\nI0827 14:53:57.646051 2499 log.go:172] (0x40007aa210) (0x400077e1e0) Stream added, broadcasting: 3\nI0827 14:53:57.647792 2499 log.go:172] (0x40007aa210) Reply frame received for 3\nI0827 14:53:57.648142 2499 log.go:172] (0x40007aa210) (0x40003e0000) Create stream\nI0827 14:53:57.648245 2499 log.go:172] (0x40007aa210) (0x40003e0000) Stream added, broadcasting: 5\nI0827 14:53:57.650538 2499 log.go:172] (0x40007aa210) Reply frame received for 5\nI0827 14:53:57.650976 2499 log.go:172] (0x40007aa210) (0x400077e280) Create stream\nI0827 14:53:57.651081 2499 log.go:172] (0x40007aa210) (0x400077e280) Stream added, broadcasting: 7\nI0827 14:53:57.652923 2499 log.go:172] (0x40007aa210) Reply frame received for 7\nI0827 14:53:57.656066 2499 log.go:172] (0x400077e1e0) (3) Writing data frame\nI0827 14:53:57.657324 2499 log.go:172] (0x400077e1e0) (3) Writing data frame\nI0827 14:53:57.664068 2499 log.go:172] (0x40007aa210) Data frame received for 5\nI0827 14:53:57.664267 2499 log.go:172] (0x40003e0000) (5) Data frame handling\nI0827 14:53:57.664671 2499 log.go:172] (0x40003e0000) (5) Data frame sent\nI0827 14:53:57.664898 2499 log.go:172] (0x40007aa210) Data frame received for 5\nI0827 14:53:57.664990 2499 log.go:172] (0x40003e0000) (5) Data frame handling\nI0827 14:53:57.665390 2499 log.go:172] (0x40003e0000) (5) Data frame sent\nI0827 14:53:57.694366 2499 log.go:172] (0x40007aa210) Data frame received for 7\nI0827 14:53:57.694531 2499 log.go:172] (0x400077e280) (7) Data frame handling\nI0827 14:53:57.694643 2499 log.go:172] (0x40007aa210) Data frame received for 1\nI0827 14:53:57.694719 2499 log.go:172] (0x400077e140) (1) Data frame handling\nI0827 14:53:57.694787 2499 log.go:172] (0x400077e140) (1) Data frame sent\nI0827 14:53:57.695042 2499 log.go:172] (0x40007aa210) Data frame received for 5\nI0827 14:53:57.696291 2499 log.go:172] (0x40007aa210) (0x400077e1e0) Stream removed, broadcasting: 3\nI0827 14:53:57.697238 2499 log.go:172] (0x40003e0000) (5) Data frame handling\nI0827 14:53:57.697410 2499 log.go:172] (0x40007aa210) (0x400077e140) Stream removed, broadcasting: 1\nI0827 14:53:57.698169 2499 log.go:172] (0x40007aa210) Go away received\nI0827 14:53:57.700086 2499 log.go:172] (0x40007aa210) (0x400077e140) Stream removed, broadcasting: 1\nI0827 14:53:57.700522 2499 log.go:172] (0x40007aa210) (0x400077e1e0) Stream removed, broadcasting: 3\nI0827 14:53:57.700829 2499 log.go:172] (0x40007aa210) (0x40003e0000) Stream removed, broadcasting: 5\nI0827 14:53:57.701487 2499 log.go:172] (0x40007aa210) (0x400077e280) Stream removed, broadcasting: 7\n" Aug 27 14:53:57.790: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:53:59.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-74lzx" for this suite. 
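Per the deprecation warning in the stderr above, the --generator=job/v1 form of kubectl run has since been removed. Following the hint in that message, a rough modern equivalent of the same smoke test runs a bare attached Pod rather than a Job (illustrative only; the resource name below is arbitrary):

printf 'abcd1234' | kubectl run e2e-test-rm-busybox --rm -i --restart=Never \
  --image=docker.io/library/busybox:1.29 -- sh -c 'cat && echo "stdin closed"'
# kubectl streams stdin to the container, waits for it to exit, prints
# "abcd1234stdin closed", and deletes the Pod on exit because of --rm.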
Aug 27 14:54:10.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:54:10.217: INFO: namespace: e2e-tests-kubectl-74lzx, resource: bindings, ignored listing per whitelist Aug 27 14:54:10.276: INFO: namespace e2e-tests-kubectl-74lzx deletion completed in 10.467058757s • [SLOW TEST:22.731 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:54:10.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-2a2f8b5e-e875-11ea-894e-0242ac11000d STEP: Creating a pod to test consume configMaps Aug 27 14:54:10.593: INFO: Waiting up to 5m0s for pod "pod-configmaps-2a32e4e5-e875-11ea-894e-0242ac11000d" in namespace "e2e-tests-configmap-gh8lv" to be "success or failure" Aug 27 14:54:10.597: INFO: Pod "pod-configmaps-2a32e4e5-e875-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.474595ms Aug 27 14:54:12.622: INFO: Pod "pod-configmaps-2a32e4e5-e875-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029586178s Aug 27 14:54:15.072: INFO: Pod "pod-configmaps-2a32e4e5-e875-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.479582847s Aug 27 14:54:17.079: INFO: Pod "pod-configmaps-2a32e4e5-e875-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.486411212s Aug 27 14:54:19.087: INFO: Pod "pod-configmaps-2a32e4e5-e875-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.493935872s STEP: Saw pod success Aug 27 14:54:19.087: INFO: Pod "pod-configmaps-2a32e4e5-e875-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:54:19.147: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-2a32e4e5-e875-11ea-894e-0242ac11000d container configmap-volume-test: STEP: delete the pod Aug 27 14:54:19.188: INFO: Waiting for pod pod-configmaps-2a32e4e5-e875-11ea-894e-0242ac11000d to disappear Aug 27 14:54:19.317: INFO: Pod pod-configmaps-2a32e4e5-e875-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:54:19.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-gh8lv" for this suite. 
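The non-root variant above changes only the pod-level securityContext; the ConfigMap volume wiring is the same as in the other ConfigMap volume cases. A hedged sketch, with UID 1000 as an arbitrary non-root choice and placeholder names throughout:

kubectl create configmap nonroot-cm --from-literal=data-1='value-1'
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-nonroot-demo              # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                          # arbitrary non-root UID
    runAsNonRoot: true
  containers:
  - name: reader
    image: docker.io/library/busybox:1.29    # stand-in image
    command: ["cat", "/etc/cfg/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    configMap:
      name: nonroot-cm
EOF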
Aug 27 14:54:27.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:54:27.442: INFO: namespace: e2e-tests-configmap-gh8lv, resource: bindings, ignored listing per whitelist Aug 27 14:54:27.489: INFO: namespace e2e-tests-configmap-gh8lv deletion completed in 8.163944992s • [SLOW TEST:17.211 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:54:27.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 27 14:54:27.991: INFO: Waiting up to 5m0s for pod "downwardapi-volume-349a8509-e875-11ea-894e-0242ac11000d" in namespace "e2e-tests-projected-kdk2k" to be "success or failure" Aug 27 14:54:28.000: INFO: Pod "downwardapi-volume-349a8509-e875-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.854733ms Aug 27 14:54:30.117: INFO: Pod "downwardapi-volume-349a8509-e875-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125591587s Aug 27 14:54:32.122: INFO: Pod "downwardapi-volume-349a8509-e875-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 4.130670995s Aug 27 14:54:34.128: INFO: Pod "downwardapi-volume-349a8509-e875-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.136421095s STEP: Saw pod success Aug 27 14:54:34.128: INFO: Pod "downwardapi-volume-349a8509-e875-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:54:34.136: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-349a8509-e875-11ea-894e-0242ac11000d container client-container: STEP: delete the pod Aug 27 14:54:34.149: INFO: Waiting for pod downwardapi-volume-349a8509-e875-11ea-894e-0242ac11000d to disappear Aug 27 14:54:34.154: INFO: Pod downwardapi-volume-349a8509-e875-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:54:34.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-kdk2k" for this suite. 
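The "podname only" case above uses a downward API item backed by fieldRef metadata.name, so the mounted file simply contains the pod's own name. A minimal stand-alone equivalent with placeholder names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo                     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29    # stand-in image
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name       # file contents: the pod's own name
EOF
kubectl logs downwardapi-demo                # prints "downwardapi-demo"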
Aug 27 14:54:42.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:54:42.333: INFO: namespace: e2e-tests-projected-kdk2k, resource: bindings, ignored listing per whitelist Aug 27 14:54:42.361: INFO: namespace e2e-tests-projected-kdk2k deletion completed in 8.200926299s • [SLOW TEST:14.872 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:54:42.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-qtlt STEP: Creating a pod to test atomic-volume-subpath Aug 27 14:54:42.561: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-qtlt" in namespace "e2e-tests-subpath-2v8lc" to be "success or failure" Aug 27 14:54:42.588: INFO: Pod "pod-subpath-test-downwardapi-qtlt": Phase="Pending", Reason="", readiness=false. Elapsed: 26.814689ms Aug 27 14:54:44.753: INFO: Pod "pod-subpath-test-downwardapi-qtlt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192043627s Aug 27 14:54:46.765: INFO: Pod "pod-subpath-test-downwardapi-qtlt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.20411851s Aug 27 14:54:48.884: INFO: Pod "pod-subpath-test-downwardapi-qtlt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.323062633s Aug 27 14:54:50.962: INFO: Pod "pod-subpath-test-downwardapi-qtlt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.401337649s Aug 27 14:54:53.027: INFO: Pod "pod-subpath-test-downwardapi-qtlt": Phase="Running", Reason="", readiness=true. Elapsed: 10.466525042s Aug 27 14:54:55.057: INFO: Pod "pod-subpath-test-downwardapi-qtlt": Phase="Running", Reason="", readiness=false. Elapsed: 12.49656221s Aug 27 14:54:57.062: INFO: Pod "pod-subpath-test-downwardapi-qtlt": Phase="Running", Reason="", readiness=false. Elapsed: 14.501181217s Aug 27 14:54:59.068: INFO: Pod "pod-subpath-test-downwardapi-qtlt": Phase="Running", Reason="", readiness=false. Elapsed: 16.506973586s Aug 27 14:55:01.073: INFO: Pod "pod-subpath-test-downwardapi-qtlt": Phase="Running", Reason="", readiness=false. Elapsed: 18.512344382s Aug 27 14:55:03.080: INFO: Pod "pod-subpath-test-downwardapi-qtlt": Phase="Running", Reason="", readiness=false. Elapsed: 20.519213919s Aug 27 14:55:05.086: INFO: Pod "pod-subpath-test-downwardapi-qtlt": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.524842117s Aug 27 14:55:07.092: INFO: Pod "pod-subpath-test-downwardapi-qtlt": Phase="Running", Reason="", readiness=false. Elapsed: 24.53078131s Aug 27 14:55:09.098: INFO: Pod "pod-subpath-test-downwardapi-qtlt": Phase="Running", Reason="", readiness=false. Elapsed: 26.537113282s Aug 27 14:55:11.104: INFO: Pod "pod-subpath-test-downwardapi-qtlt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.543516358s STEP: Saw pod success Aug 27 14:55:11.105: INFO: Pod "pod-subpath-test-downwardapi-qtlt" satisfied condition "success or failure" Aug 27 14:55:11.243: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-downwardapi-qtlt container test-container-subpath-downwardapi-qtlt: STEP: delete the pod Aug 27 14:55:11.677: INFO: Waiting for pod pod-subpath-test-downwardapi-qtlt to disappear Aug 27 14:55:11.707: INFO: Pod pod-subpath-test-downwardapi-qtlt no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-qtlt Aug 27 14:55:11.707: INFO: Deleting pod "pod-subpath-test-downwardapi-qtlt" in namespace "e2e-tests-subpath-2v8lc" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:55:11.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-2v8lc" for this suite. Aug 27 14:55:17.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:55:18.027: INFO: namespace: e2e-tests-subpath-2v8lc, resource: bindings, ignored listing per whitelist Aug 27 14:55:18.059: INFO: namespace e2e-tests-subpath-2v8lc deletion completed in 6.175184338s • [SLOW TEST:35.696 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:55:18.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-52934630-e875-11ea-894e-0242ac11000d STEP: Creating a pod to test consume secrets Aug 27 14:55:18.254: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5293c78f-e875-11ea-894e-0242ac11000d" in namespace "e2e-tests-projected-vpjbh" to be "success or failure" Aug 27 14:55:18.300: INFO: Pod "pod-projected-secrets-5293c78f-e875-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 45.1593ms Aug 27 14:55:20.393: INFO: Pod "pod-projected-secrets-5293c78f-e875-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138339965s Aug 27 14:55:22.397: INFO: Pod "pod-projected-secrets-5293c78f-e875-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.142160901s STEP: Saw pod success Aug 27 14:55:22.397: INFO: Pod "pod-projected-secrets-5293c78f-e875-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:55:22.399: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-5293c78f-e875-11ea-894e-0242ac11000d container projected-secret-volume-test: STEP: delete the pod Aug 27 14:55:22.435: INFO: Waiting for pod pod-projected-secrets-5293c78f-e875-11ea-894e-0242ac11000d to disappear Aug 27 14:55:22.445: INFO: Pod pod-projected-secrets-5293c78f-e875-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:55:22.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vpjbh" for this suite. Aug 27 14:55:28.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:55:28.581: INFO: namespace: e2e-tests-projected-vpjbh, resource: bindings, ignored listing per whitelist Aug 27 14:55:28.596: INFO: namespace e2e-tests-projected-vpjbh deletion completed in 6.144139378s • [SLOW TEST:10.536 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:55:28.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 27 14:55:28.911: INFO: Waiting up to 5m0s for pod "downwardapi-volume-58e23713-e875-11ea-894e-0242ac11000d" in namespace "e2e-tests-projected-8rw89" to be "success or failure" Aug 27 14:55:29.238: INFO: Pod "downwardapi-volume-58e23713-e875-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 327.013604ms Aug 27 14:55:31.243: INFO: Pod "downwardapi-volume-58e23713-e875-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.332477963s Aug 27 14:55:33.249: INFO: Pod "downwardapi-volume-58e23713-e875-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 4.338454814s Aug 27 14:55:35.255: INFO: Pod "downwardapi-volume-58e23713-e875-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.343936182s STEP: Saw pod success Aug 27 14:55:35.255: INFO: Pod "downwardapi-volume-58e23713-e875-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:55:35.259: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-58e23713-e875-11ea-894e-0242ac11000d container client-container: STEP: delete the pod Aug 27 14:55:35.288: INFO: Waiting for pod downwardapi-volume-58e23713-e875-11ea-894e-0242ac11000d to disappear Aug 27 14:55:35.318: INFO: Pod downwardapi-volume-58e23713-e875-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:55:35.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8rw89" for this suite. Aug 27 14:55:41.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:55:41.413: INFO: namespace: e2e-tests-projected-8rw89, resource: bindings, ignored listing per whitelist Aug 27 14:55:41.467: INFO: namespace e2e-tests-projected-8rw89 deletion completed in 6.141929475s • [SLOW TEST:12.870 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:55:41.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-6091e53e-e875-11ea-894e-0242ac11000d STEP: Creating a pod to test consume configMaps Aug 27 14:55:41.884: INFO: Waiting up to 5m0s for pod "pod-configmaps-60963b86-e875-11ea-894e-0242ac11000d" in namespace "e2e-tests-configmap-wlx5t" to be "success or failure" Aug 27 14:55:41.972: INFO: Pod "pod-configmaps-60963b86-e875-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 87.389945ms Aug 27 14:55:43.977: INFO: Pod "pod-configmaps-60963b86-e875-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092361037s Aug 27 14:55:46.663: INFO: Pod "pod-configmaps-60963b86-e875-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.778106186s Aug 27 14:55:48.668: INFO: Pod "pod-configmaps-60963b86-e875-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.783073448s STEP: Saw pod success Aug 27 14:55:48.668: INFO: Pod "pod-configmaps-60963b86-e875-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:55:48.679: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-60963b86-e875-11ea-894e-0242ac11000d container configmap-volume-test: STEP: delete the pod Aug 27 14:55:48.704: INFO: Waiting for pod pod-configmaps-60963b86-e875-11ea-894e-0242ac11000d to disappear Aug 27 14:55:48.733: INFO: Pod pod-configmaps-60963b86-e875-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:55:48.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-wlx5t" for this suite. Aug 27 14:55:54.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:55:54.799: INFO: namespace: e2e-tests-configmap-wlx5t, resource: bindings, ignored listing per whitelist Aug 27 14:55:54.870: INFO: namespace e2e-tests-configmap-wlx5t deletion completed in 6.128389854s • [SLOW TEST:13.403 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:55:54.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:56:03.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-p4lq8" for this suite. 
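The Kubelet case above schedules a busybox command that always fails and then checks that the kubelet records a terminated state (and a reason) for the container. A rough sketch of the kind of pod involved, using the k8s.io/api Go types; the image, command, and the exact assertion are assumptions rather than the test's actual source:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Pod whose only container runs a command that always fails, so the kubelet is
// expected to report a terminated container state with a non-empty Reason
// (typically "Error") and a non-zero exit code. Names and image are illustrative.
func failingBusyboxPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false-pod", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox",
				Command: []string{"/bin/false"},
			}},
		},
	}
}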
Aug 27 14:56:09.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:56:09.532: INFO: namespace: e2e-tests-kubelet-test-p4lq8, resource: bindings, ignored listing per whitelist Aug 27 14:56:09.563: INFO: namespace e2e-tests-kubelet-test-p4lq8 deletion completed in 6.392676026s • [SLOW TEST:14.692 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:56:09.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Aug 27 14:56:09.782: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 27 14:56:09.797: INFO: Waiting for terminating namespaces to be deleted... 
Aug 27 14:56:09.803: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Aug 27 14:56:09.812: INFO: kube-proxy-xm64c from kube-system started at 2020-08-15 09:32:58 +0000 UTC (1 container statuses recorded) Aug 27 14:56:09.813: INFO: Container kube-proxy ready: true, restart count 0 Aug 27 14:56:09.813: INFO: rally-a0035e6c-0q7zegi3-7f9d59c68-b7x9w from c-rally-a0035e6c-720erhyc started at 2020-08-23 21:15:14 +0000 UTC (1 container statuses recorded) Aug 27 14:56:09.813: INFO: Container rally-a0035e6c-0q7zegi3 ready: true, restart count 89 Aug 27 14:56:09.813: INFO: kindnet-kvcmt from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded) Aug 27 14:56:09.813: INFO: Container kindnet-cni ready: true, restart count 0 Aug 27 14:56:09.813: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Aug 27 14:56:09.823: INFO: kube-proxy-7x47x from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded) Aug 27 14:56:09.823: INFO: Container kube-proxy ready: true, restart count 0 Aug 27 14:56:09.823: INFO: kindnet-l4sc5 from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded) Aug 27 14:56:09.823: INFO: Container kindnet-cni ready: true, restart count 0 Aug 27 14:56:09.823: INFO: rally-a0035e6c-x0kfgasz-79fb6568cc-vpxdp from c-rally-a0035e6c-720erhyc started at 2020-08-23 21:14:52 +0000 UTC (1 container statuses recorded) Aug 27 14:56:09.823: INFO: Container rally-a0035e6c-x0kfgasz ready: true, restart count 89 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.162f27d0a7b599e7], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:56:10.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-5tqvs" for this suite. 
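The FailedScheduling event above comes from asking the scheduler to place a pod whose nodeSelector matches no label on any of the three nodes, so the pod stays Pending. A minimal sketch of such a pod, assuming the k8s.io/api Go types; the label key/value and image are illustrative, not the test's actual values:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Pod with a node selector that no node satisfies; the scheduler should emit a
// FailedScheduling event ("0/3 nodes are available: 3 node(s) didn't match node
// selector.") and never bind the pod.
func unschedulablePod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod", Namespace: ns},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"example.com/nonexistent": "true"}, // assumed label, matches nothing
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
}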
Aug 27 14:56:17.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:56:17.628: INFO: namespace: e2e-tests-sched-pred-5tqvs, resource: bindings, ignored listing per whitelist Aug 27 14:56:17.642: INFO: namespace e2e-tests-sched-pred-5tqvs deletion completed in 6.66325051s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:8.079 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:56:17.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-76145fc3-e875-11ea-894e-0242ac11000d STEP: Creating a pod to test consume configMaps Aug 27 14:56:17.856: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7616aa78-e875-11ea-894e-0242ac11000d" in namespace "e2e-tests-projected-jzrnn" to be "success or failure" Aug 27 14:56:17.867: INFO: Pod "pod-projected-configmaps-7616aa78-e875-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.213865ms Aug 27 14:56:19.872: INFO: Pod "pod-projected-configmaps-7616aa78-e875-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016791581s Aug 27 14:56:21.908: INFO: Pod "pod-projected-configmaps-7616aa78-e875-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052118802s STEP: Saw pod success Aug 27 14:56:21.908: INFO: Pod "pod-projected-configmaps-7616aa78-e875-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:56:21.911: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-7616aa78-e875-11ea-894e-0242ac11000d container projected-configmap-volume-test: STEP: delete the pod Aug 27 14:56:22.118: INFO: Waiting for pod pod-projected-configmaps-7616aa78-e875-11ea-894e-0242ac11000d to disappear Aug 27 14:56:22.171: INFO: Pod pod-projected-configmaps-7616aa78-e875-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:56:22.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jzrnn" for this suite. 
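The projected-ConfigMap case that just passed mounts a ConfigMap through a projected volume with an explicit key-to-path mapping and a per-item file mode, then has the container read the mapped path back. A minimal sketch of such a pod, assuming the k8s.io/api Go types; key, path, mode, and names are illustrative rather than the test's actual values:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ConfigMap projected into a volume with a key-to-path mapping and an explicit
// item mode; the container prints the mapped file so content and permissions
// can be verified.
func projectedConfigMapPod(ns, cmName string) *corev1.Pod {
	mode := int32(0400) // assumed item mode
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2", Mode: &mode}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
		},
	}
}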
Aug 27 14:56:28.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:56:28.586: INFO: namespace: e2e-tests-projected-jzrnn, resource: bindings, ignored listing per whitelist Aug 27 14:56:28.639: INFO: namespace e2e-tests-projected-jzrnn deletion completed in 6.460002041s • [SLOW TEST:10.995 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:56:28.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 27 14:56:28.769: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:56:29.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-5v4tb" for this suite. 
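The CustomResourceDefinition case above only creates and then deletes a definition object against the API server. A minimal sketch of such an object, written against the current apiextensions.k8s.io/v1 Go types (the v1.13 cluster in this run actually serves the older v1beta1 API); group, kind, and names are placeholders:

package sketch

import (
	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Minimal namespaced CRD with a single served/storage version and a permissive
// object schema; everything here is illustrative.
func exampleCRD() *apiextv1.CustomResourceDefinition {
	return &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural:   "widgets",
				Singular: "widget",
				Kind:     "Widget",
				ListKind: "WidgetList",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
				},
			}},
		},
	}
}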
Aug 27 14:56:35.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:56:36.072: INFO: namespace: e2e-tests-custom-resource-definition-5v4tb, resource: bindings, ignored listing per whitelist Aug 27 14:56:36.072: INFO: namespace e2e-tests-custom-resource-definition-5v4tb deletion completed in 6.184703336s • [SLOW TEST:7.433 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:56:36.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-gmfb4 I0827 14:56:36.236318 7 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-gmfb4, replica count: 1 I0827 14:56:37.287536 7 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0827 14:56:38.288148 7 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0827 14:56:39.288638 7 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0827 14:56:40.289237 7 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 27 14:56:40.425: INFO: Created: latency-svc-mfb24 Aug 27 14:56:40.468: INFO: Got endpoints: latency-svc-mfb24 [76.197084ms] Aug 27 14:56:40.509: INFO: Created: latency-svc-h5l99 Aug 27 14:56:40.525: INFO: Got endpoints: latency-svc-h5l99 [56.036065ms] Aug 27 14:56:40.547: INFO: Created: latency-svc-sfr22 Aug 27 14:56:40.590: INFO: Got endpoints: latency-svc-sfr22 [121.555321ms] Aug 27 14:56:40.624: INFO: Created: latency-svc-mv789 Aug 27 14:56:40.633: INFO: Got endpoints: latency-svc-mv789 [164.420534ms] Aug 27 14:56:40.657: INFO: Created: latency-svc-qm9x2 Aug 27 14:56:40.670: INFO: Got endpoints: latency-svc-qm9x2 [200.179815ms] Aug 27 14:56:40.735: INFO: Created: latency-svc-656fb Aug 27 14:56:40.736: INFO: Got endpoints: latency-svc-656fb [266.80496ms] Aug 27 14:56:40.800: INFO: Created: latency-svc-m7sgt Aug 27 14:56:40.828: INFO: Got endpoints: latency-svc-m7sgt [359.160431ms] Aug 27 14:56:40.884: INFO: Created: latency-svc-xrq26 Aug 27 14:56:40.906: INFO: 
Got endpoints: latency-svc-xrq26 [437.34851ms] Aug 27 14:56:41.052: INFO: Created: latency-svc-k55zq Aug 27 14:56:41.057: INFO: Got endpoints: latency-svc-k55zq [587.722537ms] Aug 27 14:56:41.126: INFO: Created: latency-svc-svjqb Aug 27 14:56:41.141: INFO: Got endpoints: latency-svc-svjqb [671.465242ms] Aug 27 14:56:41.193: INFO: Created: latency-svc-vx74d Aug 27 14:56:41.215: INFO: Got endpoints: latency-svc-vx74d [746.27516ms] Aug 27 14:56:41.255: INFO: Created: latency-svc-9xxb7 Aug 27 14:56:41.279: INFO: Got endpoints: latency-svc-9xxb7 [810.16318ms] Aug 27 14:56:41.379: INFO: Created: latency-svc-57xnx Aug 27 14:56:41.393: INFO: Got endpoints: latency-svc-57xnx [923.453873ms] Aug 27 14:56:41.697: INFO: Created: latency-svc-wcc7z Aug 27 14:56:41.898: INFO: Got endpoints: latency-svc-wcc7z [1.428365003s] Aug 27 14:56:42.086: INFO: Created: latency-svc-hxt85 Aug 27 14:56:42.357: INFO: Got endpoints: latency-svc-hxt85 [1.888155327s] Aug 27 14:56:42.642: INFO: Created: latency-svc-5snmp Aug 27 14:56:42.658: INFO: Got endpoints: latency-svc-5snmp [2.188657833s] Aug 27 14:56:42.719: INFO: Created: latency-svc-hhfvh Aug 27 14:56:42.758: INFO: Got endpoints: latency-svc-hhfvh [2.232867776s] Aug 27 14:56:42.808: INFO: Created: latency-svc-6jpgx Aug 27 14:56:42.826: INFO: Got endpoints: latency-svc-6jpgx [2.23522051s] Aug 27 14:56:42.852: INFO: Created: latency-svc-jrwwg Aug 27 14:56:42.920: INFO: Got endpoints: latency-svc-jrwwg [2.286117893s] Aug 27 14:56:42.969: INFO: Created: latency-svc-hhk65 Aug 27 14:56:43.001: INFO: Got endpoints: latency-svc-hhk65 [2.330895601s] Aug 27 14:56:43.076: INFO: Created: latency-svc-h8cvt Aug 27 14:56:43.079: INFO: Got endpoints: latency-svc-h8cvt [2.342079574s] Aug 27 14:56:43.112: INFO: Created: latency-svc-cp9xn Aug 27 14:56:43.136: INFO: Got endpoints: latency-svc-cp9xn [2.307898128s] Aug 27 14:56:43.166: INFO: Created: latency-svc-5v8fj Aug 27 14:56:43.225: INFO: Got endpoints: latency-svc-5v8fj [2.318492421s] Aug 27 14:56:43.253: INFO: Created: latency-svc-4lgrx Aug 27 14:56:43.272: INFO: Got endpoints: latency-svc-4lgrx [2.21530135s] Aug 27 14:56:43.364: INFO: Created: latency-svc-8k4p8 Aug 27 14:56:43.374: INFO: Got endpoints: latency-svc-8k4p8 [2.233130068s] Aug 27 14:56:43.422: INFO: Created: latency-svc-lk92g Aug 27 14:56:43.447: INFO: Got endpoints: latency-svc-lk92g [2.23174275s] Aug 27 14:56:43.776: INFO: Created: latency-svc-4w7x6 Aug 27 14:56:43.780: INFO: Got endpoints: latency-svc-4w7x6 [2.500670595s] Aug 27 14:56:43.870: INFO: Created: latency-svc-qvx9p Aug 27 14:56:43.968: INFO: Got endpoints: latency-svc-qvx9p [2.574661513s] Aug 27 14:56:43.973: INFO: Created: latency-svc-b9pd9 Aug 27 14:56:44.207: INFO: Got endpoints: latency-svc-b9pd9 [2.308906575s] Aug 27 14:56:44.405: INFO: Created: latency-svc-d2rll Aug 27 14:56:44.436: INFO: Got endpoints: latency-svc-d2rll [2.078936136s] Aug 27 14:56:44.603: INFO: Created: latency-svc-twnp8 Aug 27 14:56:44.610: INFO: Got endpoints: latency-svc-twnp8 [1.951850339s] Aug 27 14:56:44.649: INFO: Created: latency-svc-nxhlg Aug 27 14:56:44.725: INFO: Got endpoints: latency-svc-nxhlg [1.967271019s] Aug 27 14:56:44.766: INFO: Created: latency-svc-7kfrr Aug 27 14:56:44.797: INFO: Got endpoints: latency-svc-7kfrr [1.970602403s] Aug 27 14:56:44.884: INFO: Created: latency-svc-sg5lz Aug 27 14:56:44.900: INFO: Got endpoints: latency-svc-sg5lz [1.980475804s] Aug 27 14:56:44.901: INFO: Created: latency-svc-nbgfh Aug 27 14:56:44.933: INFO: Got endpoints: latency-svc-nbgfh [1.931440495s] Aug 27 14:56:45.034: INFO: 
Created: latency-svc-f2sfr Aug 27 14:56:45.038: INFO: Got endpoints: latency-svc-f2sfr [1.958924472s] Aug 27 14:56:45.099: INFO: Created: latency-svc-vs9rg Aug 27 14:56:45.122: INFO: Got endpoints: latency-svc-vs9rg [1.985138518s] Aug 27 14:56:45.169: INFO: Created: latency-svc-jf2b5 Aug 27 14:56:45.209: INFO: Got endpoints: latency-svc-jf2b5 [1.983800766s] Aug 27 14:56:45.261: INFO: Created: latency-svc-kbxv7 Aug 27 14:56:45.321: INFO: Got endpoints: latency-svc-kbxv7 [2.048201541s] Aug 27 14:56:45.341: INFO: Created: latency-svc-flkvs Aug 27 14:56:45.359: INFO: Got endpoints: latency-svc-flkvs [1.985260871s] Aug 27 14:56:45.413: INFO: Created: latency-svc-57cf5 Aug 27 14:56:45.470: INFO: Got endpoints: latency-svc-57cf5 [2.02293389s] Aug 27 14:56:45.533: INFO: Created: latency-svc-slsbf Aug 27 14:56:45.591: INFO: Got endpoints: latency-svc-slsbf [1.810404689s] Aug 27 14:56:45.617: INFO: Created: latency-svc-r9588 Aug 27 14:56:45.628: INFO: Got endpoints: latency-svc-r9588 [1.659594934s] Aug 27 14:56:45.651: INFO: Created: latency-svc-8cgfj Aug 27 14:56:45.676: INFO: Got endpoints: latency-svc-8cgfj [1.468626654s] Aug 27 14:56:45.795: INFO: Created: latency-svc-bdhbp Aug 27 14:56:45.802: INFO: Got endpoints: latency-svc-bdhbp [1.365183467s] Aug 27 14:56:45.824: INFO: Created: latency-svc-x9xrq Aug 27 14:56:45.867: INFO: Got endpoints: latency-svc-x9xrq [1.256645576s] Aug 27 14:56:45.950: INFO: Created: latency-svc-xfdc2 Aug 27 14:56:45.959: INFO: Got endpoints: latency-svc-xfdc2 [1.233257169s] Aug 27 14:56:46.023: INFO: Created: latency-svc-gb2q6 Aug 27 14:56:46.049: INFO: Got endpoints: latency-svc-gb2q6 [1.252631894s] Aug 27 14:56:46.143: INFO: Created: latency-svc-dvdr9 Aug 27 14:56:46.176: INFO: Got endpoints: latency-svc-dvdr9 [1.275207035s] Aug 27 14:56:46.218: INFO: Created: latency-svc-k8vqp Aug 27 14:56:46.273: INFO: Got endpoints: latency-svc-k8vqp [1.340035859s] Aug 27 14:56:46.293: INFO: Created: latency-svc-ddfs5 Aug 27 14:56:46.315: INFO: Got endpoints: latency-svc-ddfs5 [1.277268285s] Aug 27 14:56:48.470: INFO: Created: latency-svc-g62f7 Aug 27 14:56:48.486: INFO: Got endpoints: latency-svc-g62f7 [3.363998973s] Aug 27 14:56:48.567: INFO: Created: latency-svc-f2jrj Aug 27 14:56:48.569: INFO: Got endpoints: latency-svc-f2jrj [3.359938719s] Aug 27 14:56:48.820: INFO: Created: latency-svc-bblhw Aug 27 14:56:49.058: INFO: Got endpoints: latency-svc-bblhw [3.736870739s] Aug 27 14:56:49.227: INFO: Created: latency-svc-dvj2g Aug 27 14:56:49.438: INFO: Got endpoints: latency-svc-dvj2g [4.078432374s] Aug 27 14:56:49.813: INFO: Created: latency-svc-fv2dr Aug 27 14:56:49.990: INFO: Got endpoints: latency-svc-fv2dr [4.519427441s] Aug 27 14:56:50.130: INFO: Created: latency-svc-9bjvj Aug 27 14:56:50.134: INFO: Got endpoints: latency-svc-9bjvj [4.543492998s] Aug 27 14:56:50.219: INFO: Created: latency-svc-7ps6m Aug 27 14:56:50.309: INFO: Got endpoints: latency-svc-7ps6m [4.681395498s] Aug 27 14:56:50.350: INFO: Created: latency-svc-km7k4 Aug 27 14:56:50.373: INFO: Got endpoints: latency-svc-km7k4 [4.696446383s] Aug 27 14:56:50.561: INFO: Created: latency-svc-lvx57 Aug 27 14:56:50.627: INFO: Created: latency-svc-jfxtx Aug 27 14:56:50.738: INFO: Got endpoints: latency-svc-lvx57 [4.936286655s] Aug 27 14:56:50.739: INFO: Created: latency-svc-st72d Aug 27 14:56:50.758: INFO: Got endpoints: latency-svc-st72d [4.798740803s] Aug 27 14:56:50.814: INFO: Got endpoints: latency-svc-jfxtx [4.947001621s] Aug 27 14:56:50.818: INFO: Created: latency-svc-n9drw Aug 27 14:56:50.860: INFO: Got endpoints: 
latency-svc-n9drw [4.810519571s] Aug 27 14:56:50.895: INFO: Created: latency-svc-fx7fd Aug 27 14:56:50.914: INFO: Got endpoints: latency-svc-fx7fd [4.737977321s] Aug 27 14:56:50.937: INFO: Created: latency-svc-2tzxk Aug 27 14:56:50.956: INFO: Got endpoints: latency-svc-2tzxk [4.682809345s] Aug 27 14:56:51.042: INFO: Created: latency-svc-tp7pn Aug 27 14:56:51.071: INFO: Got endpoints: latency-svc-tp7pn [4.755732168s] Aug 27 14:56:51.111: INFO: Created: latency-svc-zzlp9 Aug 27 14:56:51.171: INFO: Got endpoints: latency-svc-zzlp9 [2.685337447s] Aug 27 14:56:51.191: INFO: Created: latency-svc-rshfm Aug 27 14:56:51.209: INFO: Got endpoints: latency-svc-rshfm [2.640271928s] Aug 27 14:56:51.258: INFO: Created: latency-svc-847zc Aug 27 14:56:51.461: INFO: Got endpoints: latency-svc-847zc [2.403101917s] Aug 27 14:56:51.464: INFO: Created: latency-svc-99w9l Aug 27 14:56:51.522: INFO: Got endpoints: latency-svc-99w9l [2.083915436s] Aug 27 14:56:51.621: INFO: Created: latency-svc-vsszj Aug 27 14:56:51.642: INFO: Got endpoints: latency-svc-vsszj [1.65252964s] Aug 27 14:56:51.783: INFO: Created: latency-svc-gmwxz Aug 27 14:56:51.794: INFO: Got endpoints: latency-svc-gmwxz [1.658593781s] Aug 27 14:56:51.950: INFO: Created: latency-svc-q9nfc Aug 27 14:56:51.972: INFO: Got endpoints: latency-svc-q9nfc [1.662937257s] Aug 27 14:56:52.435: INFO: Created: latency-svc-675sc Aug 27 14:56:52.438: INFO: Got endpoints: latency-svc-675sc [2.064801441s] Aug 27 14:56:52.941: INFO: Created: latency-svc-rprcj Aug 27 14:56:53.162: INFO: Got endpoints: latency-svc-rprcj [2.423788816s] Aug 27 14:56:53.202: INFO: Created: latency-svc-cfcll Aug 27 14:56:53.345: INFO: Got endpoints: latency-svc-cfcll [2.587408919s] Aug 27 14:56:53.513: INFO: Created: latency-svc-n7gb5 Aug 27 14:56:53.526: INFO: Got endpoints: latency-svc-n7gb5 [2.712035022s] Aug 27 14:56:53.553: INFO: Created: latency-svc-4bhl9 Aug 27 14:56:53.585: INFO: Got endpoints: latency-svc-4bhl9 [2.724784168s] Aug 27 14:56:53.694: INFO: Created: latency-svc-ztfrg Aug 27 14:56:53.712: INFO: Got endpoints: latency-svc-ztfrg [2.797369798s] Aug 27 14:56:53.743: INFO: Created: latency-svc-zlqkf Aug 27 14:56:53.754: INFO: Got endpoints: latency-svc-zlqkf [2.79817356s] Aug 27 14:56:53.866: INFO: Created: latency-svc-cqdh5 Aug 27 14:56:53.910: INFO: Got endpoints: latency-svc-cqdh5 [2.838920735s] Aug 27 14:56:54.121: INFO: Created: latency-svc-2dv2l Aug 27 14:56:54.477: INFO: Got endpoints: latency-svc-2dv2l [3.305620116s] Aug 27 14:56:54.517: INFO: Created: latency-svc-tpls4 Aug 27 14:56:54.565: INFO: Got endpoints: latency-svc-tpls4 [3.354874898s] Aug 27 14:56:55.000: INFO: Created: latency-svc-8mxgb Aug 27 14:56:55.008: INFO: Got endpoints: latency-svc-8mxgb [3.546427446s] Aug 27 14:56:55.126: INFO: Created: latency-svc-s7qj2 Aug 27 14:56:55.128: INFO: Got endpoints: latency-svc-s7qj2 [3.605256578s] Aug 27 14:56:55.162: INFO: Created: latency-svc-59fn4 Aug 27 14:56:55.190: INFO: Got endpoints: latency-svc-59fn4 [3.547910781s] Aug 27 14:56:55.309: INFO: Created: latency-svc-72c6c Aug 27 14:56:55.321: INFO: Got endpoints: latency-svc-72c6c [3.526089875s] Aug 27 14:56:55.373: INFO: Created: latency-svc-6qv2n Aug 27 14:56:55.381: INFO: Got endpoints: latency-svc-6qv2n [3.408295466s] Aug 27 14:56:55.517: INFO: Created: latency-svc-9259m Aug 27 14:56:55.579: INFO: Got endpoints: latency-svc-9259m [3.141006639s] Aug 27 14:56:55.681: INFO: Created: latency-svc-ckd5b Aug 27 14:56:56.159: INFO: Got endpoints: latency-svc-ckd5b [2.99676236s] Aug 27 14:56:56.162: INFO: Created: 
latency-svc-wthn9 Aug 27 14:56:56.393: INFO: Got endpoints: latency-svc-wthn9 [3.047841467s] Aug 27 14:56:56.475: INFO: Created: latency-svc-2fdcs Aug 27 14:56:56.597: INFO: Got endpoints: latency-svc-2fdcs [3.070407338s] Aug 27 14:56:57.032: INFO: Created: latency-svc-bm96b Aug 27 14:56:57.214: INFO: Got endpoints: latency-svc-bm96b [3.628263772s] Aug 27 14:56:57.387: INFO: Created: latency-svc-8qwtb Aug 27 14:56:57.433: INFO: Got endpoints: latency-svc-8qwtb [3.720901979s] Aug 27 14:56:57.555: INFO: Created: latency-svc-th5qj Aug 27 14:56:57.558: INFO: Got endpoints: latency-svc-th5qj [3.803255355s] Aug 27 14:56:57.640: INFO: Created: latency-svc-s6k9w Aug 27 14:56:57.720: INFO: Got endpoints: latency-svc-s6k9w [3.809016158s] Aug 27 14:56:57.789: INFO: Created: latency-svc-vf8fn Aug 27 14:56:57.793: INFO: Got endpoints: latency-svc-vf8fn [3.315678595s] Aug 27 14:56:57.866: INFO: Created: latency-svc-h56g6 Aug 27 14:56:57.869: INFO: Got endpoints: latency-svc-h56g6 [3.304550156s] Aug 27 14:56:57.923: INFO: Created: latency-svc-sd7c4 Aug 27 14:56:57.937: INFO: Got endpoints: latency-svc-sd7c4 [2.929407974s] Aug 27 14:56:57.966: INFO: Created: latency-svc-6ghkn Aug 27 14:56:58.016: INFO: Got endpoints: latency-svc-6ghkn [2.888162749s] Aug 27 14:56:58.022: INFO: Created: latency-svc-lftpl Aug 27 14:56:58.040: INFO: Got endpoints: latency-svc-lftpl [2.849204746s] Aug 27 14:56:58.066: INFO: Created: latency-svc-ppdgd Aug 27 14:56:58.077: INFO: Got endpoints: latency-svc-ppdgd [2.756214792s] Aug 27 14:56:58.105: INFO: Created: latency-svc-s6x4t Aug 27 14:56:58.159: INFO: Got endpoints: latency-svc-s6x4t [2.77821304s] Aug 27 14:56:58.175: INFO: Created: latency-svc-qp6mt Aug 27 14:56:58.192: INFO: Got endpoints: latency-svc-qp6mt [2.612648817s] Aug 27 14:56:58.211: INFO: Created: latency-svc-8khct Aug 27 14:56:58.228: INFO: Got endpoints: latency-svc-8khct [2.068274475s] Aug 27 14:56:58.251: INFO: Created: latency-svc-xzslx Aug 27 14:56:58.309: INFO: Got endpoints: latency-svc-xzslx [1.915182797s] Aug 27 14:56:58.316: INFO: Created: latency-svc-xpw64 Aug 27 14:56:58.337: INFO: Got endpoints: latency-svc-xpw64 [1.740075156s] Aug 27 14:56:59.615: INFO: Created: latency-svc-fvjjj Aug 27 14:56:59.711: INFO: Got endpoints: latency-svc-fvjjj [2.496762171s] Aug 27 14:56:59.741: INFO: Created: latency-svc-n9xz6 Aug 27 14:56:59.757: INFO: Got endpoints: latency-svc-n9xz6 [2.324167984s] Aug 27 14:56:59.866: INFO: Created: latency-svc-d7ldl Aug 27 14:56:59.869: INFO: Got endpoints: latency-svc-d7ldl [2.310870171s] Aug 27 14:56:59.950: INFO: Created: latency-svc-nm9b7 Aug 27 14:56:59.961: INFO: Got endpoints: latency-svc-nm9b7 [2.24159608s] Aug 27 14:57:00.048: INFO: Created: latency-svc-mn95b Aug 27 14:57:00.064: INFO: Got endpoints: latency-svc-mn95b [2.270509938s] Aug 27 14:57:00.111: INFO: Created: latency-svc-gghlx Aug 27 14:57:00.195: INFO: Got endpoints: latency-svc-gghlx [2.325727236s] Aug 27 14:57:01.421: INFO: Created: latency-svc-2n8lb Aug 27 14:57:01.494: INFO: Created: latency-svc-c4cvr Aug 27 14:57:01.515: INFO: Got endpoints: latency-svc-c4cvr [3.498493646s] Aug 27 14:57:01.516: INFO: Got endpoints: latency-svc-2n8lb [3.578367517s] Aug 27 14:57:01.978: INFO: Created: latency-svc-88g4k Aug 27 14:57:01.994: INFO: Got endpoints: latency-svc-88g4k [3.953715965s] Aug 27 14:57:02.173: INFO: Created: latency-svc-br7zz Aug 27 14:57:02.186: INFO: Got endpoints: latency-svc-br7zz [4.108663686s] Aug 27 14:57:02.257: INFO: Created: latency-svc-txnmz Aug 27 14:57:02.293: INFO: Got endpoints: 
latency-svc-txnmz [4.133636498s] Aug 27 14:57:02.323: INFO: Created: latency-svc-t2xjp Aug 27 14:57:02.337: INFO: Got endpoints: latency-svc-t2xjp [4.145000628s] Aug 27 14:57:02.423: INFO: Created: latency-svc-jj99l Aug 27 14:57:02.656: INFO: Got endpoints: latency-svc-jj99l [4.427861784s] Aug 27 14:57:02.737: INFO: Created: latency-svc-5vt98 Aug 27 14:57:02.884: INFO: Got endpoints: latency-svc-5vt98 [4.574390721s] Aug 27 14:57:02.922: INFO: Created: latency-svc-r5gt6 Aug 27 14:57:02.943: INFO: Got endpoints: latency-svc-r5gt6 [4.606124172s] Aug 27 14:57:03.117: INFO: Created: latency-svc-w7b5x Aug 27 14:57:03.120: INFO: Got endpoints: latency-svc-w7b5x [3.40873765s] Aug 27 14:57:04.358: INFO: Created: latency-svc-9x8fv Aug 27 14:57:04.393: INFO: Got endpoints: latency-svc-9x8fv [4.63609847s] Aug 27 14:57:05.856: INFO: Created: latency-svc-cl9zb Aug 27 14:57:05.933: INFO: Got endpoints: latency-svc-cl9zb [6.063742025s] Aug 27 14:57:06.082: INFO: Created: latency-svc-7pwhz Aug 27 14:57:06.107: INFO: Got endpoints: latency-svc-7pwhz [6.145584987s] Aug 27 14:57:06.176: INFO: Created: latency-svc-7sx2d Aug 27 14:57:07.321: INFO: Got endpoints: latency-svc-7sx2d [7.257611707s] Aug 27 14:57:07.580: INFO: Created: latency-svc-6q9gz Aug 27 14:57:07.659: INFO: Got endpoints: latency-svc-6q9gz [7.463613465s] Aug 27 14:57:07.892: INFO: Created: latency-svc-wq487 Aug 27 14:57:07.943: INFO: Got endpoints: latency-svc-wq487 [6.42808445s] Aug 27 14:57:08.214: INFO: Created: latency-svc-8n5jg Aug 27 14:57:08.248: INFO: Got endpoints: latency-svc-8n5jg [6.731824551s] Aug 27 14:57:08.510: INFO: Created: latency-svc-m496d Aug 27 14:57:08.728: INFO: Got endpoints: latency-svc-m496d [6.734287078s] Aug 27 14:57:08.792: INFO: Created: latency-svc-9z2nk Aug 27 14:57:08.811: INFO: Got endpoints: latency-svc-9z2nk [6.625128378s] Aug 27 14:57:08.938: INFO: Created: latency-svc-7xdfk Aug 27 14:57:08.965: INFO: Got endpoints: latency-svc-7xdfk [6.671751051s] Aug 27 14:57:09.003: INFO: Created: latency-svc-9n5jh Aug 27 14:57:09.009: INFO: Got endpoints: latency-svc-9n5jh [6.67242196s] Aug 27 14:57:09.154: INFO: Created: latency-svc-rl5sl Aug 27 14:57:09.157: INFO: Got endpoints: latency-svc-rl5sl [6.500407772s] Aug 27 14:57:09.431: INFO: Created: latency-svc-7c2n5 Aug 27 14:57:09.651: INFO: Got endpoints: latency-svc-7c2n5 [6.766931066s] Aug 27 14:57:09.700: INFO: Created: latency-svc-8bw6z Aug 27 14:57:09.735: INFO: Got endpoints: latency-svc-8bw6z [6.79193524s] Aug 27 14:57:10.037: INFO: Created: latency-svc-cxsz9 Aug 27 14:57:10.039: INFO: Got endpoints: latency-svc-cxsz9 [6.919380209s] Aug 27 14:57:10.328: INFO: Created: latency-svc-x4284 Aug 27 14:57:10.389: INFO: Got endpoints: latency-svc-x4284 [5.995634725s] Aug 27 14:57:10.390: INFO: Created: latency-svc-jsmdr Aug 27 14:57:10.420: INFO: Got endpoints: latency-svc-jsmdr [4.486707011s] Aug 27 14:57:10.717: INFO: Created: latency-svc-9fj9w Aug 27 14:57:10.768: INFO: Got endpoints: latency-svc-9fj9w [4.660633441s] Aug 27 14:57:10.947: INFO: Created: latency-svc-gpkhc Aug 27 14:57:10.999: INFO: Got endpoints: latency-svc-gpkhc [3.677476726s] Aug 27 14:57:11.070: INFO: Created: latency-svc-8d6j9 Aug 27 14:57:11.098: INFO: Got endpoints: latency-svc-8d6j9 [3.438441799s] Aug 27 14:57:11.154: INFO: Created: latency-svc-tvb9x Aug 27 14:57:11.447: INFO: Got endpoints: latency-svc-tvb9x [3.504302684s] Aug 27 14:57:11.451: INFO: Created: latency-svc-lml9p Aug 27 14:57:11.476: INFO: Got endpoints: latency-svc-lml9p [3.227741623s] Aug 27 14:57:11.675: INFO: Created: 
latency-svc-vcpzl Aug 27 14:57:11.750: INFO: Created: latency-svc-hkdmf Aug 27 14:57:11.868: INFO: Got endpoints: latency-svc-vcpzl [3.139772675s] Aug 27 14:57:11.890: INFO: Got endpoints: latency-svc-hkdmf [3.078865299s] Aug 27 14:57:11.938: INFO: Created: latency-svc-vrg68 Aug 27 14:57:11.945: INFO: Got endpoints: latency-svc-vrg68 [2.9797259s] Aug 27 14:57:12.034: INFO: Created: latency-svc-xsnz6 Aug 27 14:57:12.100: INFO: Got endpoints: latency-svc-xsnz6 [3.090958051s] Aug 27 14:57:12.389: INFO: Created: latency-svc-l98px Aug 27 14:57:12.753: INFO: Got endpoints: latency-svc-l98px [3.596114154s] Aug 27 14:57:12.792: INFO: Created: latency-svc-xhfhh Aug 27 14:57:12.808: INFO: Got endpoints: latency-svc-xhfhh [3.157023929s] Aug 27 14:57:13.939: INFO: Created: latency-svc-f8cpn Aug 27 14:57:13.941: INFO: Got endpoints: latency-svc-f8cpn [4.205538457s] Aug 27 14:57:13.990: INFO: Created: latency-svc-mlwbf Aug 27 14:57:14.009: INFO: Got endpoints: latency-svc-mlwbf [3.969826835s] Aug 27 14:57:15.461: INFO: Created: latency-svc-qczcw Aug 27 14:57:15.480: INFO: Got endpoints: latency-svc-qczcw [5.091157912s] Aug 27 14:57:15.541: INFO: Created: latency-svc-jm5bd Aug 27 14:57:15.908: INFO: Got endpoints: latency-svc-jm5bd [5.488682857s] Aug 27 14:57:15.910: INFO: Created: latency-svc-4z5hf Aug 27 14:57:16.130: INFO: Got endpoints: latency-svc-4z5hf [5.361607523s] Aug 27 14:57:16.228: INFO: Created: latency-svc-snbnb Aug 27 14:57:16.309: INFO: Got endpoints: latency-svc-snbnb [5.309841548s] Aug 27 14:57:16.318: INFO: Created: latency-svc-rxzb5 Aug 27 14:57:16.357: INFO: Got endpoints: latency-svc-rxzb5 [5.258800234s] Aug 27 14:57:16.839: INFO: Created: latency-svc-x9mv5 Aug 27 14:57:16.843: INFO: Got endpoints: latency-svc-x9mv5 [5.395618786s] Aug 27 14:57:17.355: INFO: Created: latency-svc-qkfqx Aug 27 14:57:17.374: INFO: Got endpoints: latency-svc-qkfqx [5.89788957s] Aug 27 14:57:17.519: INFO: Created: latency-svc-xf6fk Aug 27 14:57:17.533: INFO: Got endpoints: latency-svc-xf6fk [5.664428525s] Aug 27 14:57:18.646: INFO: Created: latency-svc-vklz5 Aug 27 14:57:18.722: INFO: Got endpoints: latency-svc-vklz5 [6.831938524s] Aug 27 14:57:18.735: INFO: Created: latency-svc-kgjdv Aug 27 14:57:18.786: INFO: Got endpoints: latency-svc-kgjdv [6.840221271s] Aug 27 14:57:19.040: INFO: Created: latency-svc-bxzbg Aug 27 14:57:19.219: INFO: Got endpoints: latency-svc-bxzbg [7.118606865s] Aug 27 14:57:19.234: INFO: Created: latency-svc-z8qsp Aug 27 14:57:19.267: INFO: Got endpoints: latency-svc-z8qsp [6.513698838s] Aug 27 14:57:19.489: INFO: Created: latency-svc-dfzsv Aug 27 14:57:19.493: INFO: Got endpoints: latency-svc-dfzsv [6.684733367s] Aug 27 14:57:19.543: INFO: Created: latency-svc-kkw8k Aug 27 14:57:19.559: INFO: Got endpoints: latency-svc-kkw8k [5.617724059s] Aug 27 14:57:19.675: INFO: Created: latency-svc-jw27l Aug 27 14:57:19.691: INFO: Got endpoints: latency-svc-jw27l [5.682078329s] Aug 27 14:57:19.717: INFO: Created: latency-svc-l72ss Aug 27 14:57:19.734: INFO: Got endpoints: latency-svc-l72ss [4.253183141s] Aug 27 14:57:19.759: INFO: Created: latency-svc-r2x94 Aug 27 14:57:19.824: INFO: Got endpoints: latency-svc-r2x94 [3.915398046s] Aug 27 14:57:19.834: INFO: Created: latency-svc-25wsz Aug 27 14:57:19.849: INFO: Got endpoints: latency-svc-25wsz [3.718657508s] Aug 27 14:57:19.873: INFO: Created: latency-svc-nx8qm Aug 27 14:57:19.909: INFO: Got endpoints: latency-svc-nx8qm [3.599385882s] Aug 27 14:57:19.921: INFO: Created: latency-svc-l5f2f Aug 27 14:57:19.981: INFO: Got endpoints: 
latency-svc-l5f2f [3.624559256s] Aug 27 14:57:19.994: INFO: Created: latency-svc-2nv97 Aug 27 14:57:20.012: INFO: Got endpoints: latency-svc-2nv97 [3.168252465s] Aug 27 14:57:20.034: INFO: Created: latency-svc-xhfsc Aug 27 14:57:20.056: INFO: Got endpoints: latency-svc-xhfsc [2.682212273s] Aug 27 14:57:20.143: INFO: Created: latency-svc-fbthd Aug 27 14:57:20.145: INFO: Got endpoints: latency-svc-fbthd [2.611783672s] Aug 27 14:57:20.235: INFO: Created: latency-svc-q96wj Aug 27 14:57:20.328: INFO: Got endpoints: latency-svc-q96wj [1.605439296s] Aug 27 14:57:20.332: INFO: Created: latency-svc-5cb4s Aug 27 14:57:20.349: INFO: Got endpoints: latency-svc-5cb4s [1.563416588s] Aug 27 14:57:20.377: INFO: Created: latency-svc-cdzdq Aug 27 14:57:20.392: INFO: Got endpoints: latency-svc-cdzdq [1.172080141s] Aug 27 14:57:20.501: INFO: Created: latency-svc-rg9dv Aug 27 14:57:20.504: INFO: Got endpoints: latency-svc-rg9dv [1.23653838s] Aug 27 14:57:20.557: INFO: Created: latency-svc-b29zf Aug 27 14:57:20.669: INFO: Got endpoints: latency-svc-b29zf [1.175770221s] Aug 27 14:57:20.675: INFO: Created: latency-svc-56hhv Aug 27 14:57:20.680: INFO: Got endpoints: latency-svc-56hhv [1.120767054s] Aug 27 14:57:20.709: INFO: Created: latency-svc-czzd7 Aug 27 14:57:20.723: INFO: Got endpoints: latency-svc-czzd7 [1.031410269s] Aug 27 14:57:20.755: INFO: Created: latency-svc-v2mhm Aug 27 14:57:20.837: INFO: Got endpoints: latency-svc-v2mhm [1.102475484s] Aug 27 14:57:20.839: INFO: Created: latency-svc-6w6fp Aug 27 14:57:20.850: INFO: Got endpoints: latency-svc-6w6fp [1.026180714s] Aug 27 14:57:20.890: INFO: Created: latency-svc-rlxz7 Aug 27 14:57:20.922: INFO: Got endpoints: latency-svc-rlxz7 [1.07313343s] Aug 27 14:57:20.999: INFO: Created: latency-svc-wqgq9 Aug 27 14:57:21.007: INFO: Got endpoints: latency-svc-wqgq9 [1.098066481s] Aug 27 14:57:21.046: INFO: Created: latency-svc-j5j52 Aug 27 14:57:21.061: INFO: Got endpoints: latency-svc-j5j52 [1.079345733s] Aug 27 14:57:21.160: INFO: Created: latency-svc-9qn2f Aug 27 14:57:21.162: INFO: Got endpoints: latency-svc-9qn2f [1.149888166s] Aug 27 14:57:21.205: INFO: Created: latency-svc-wbkl2 Aug 27 14:57:21.241: INFO: Got endpoints: latency-svc-wbkl2 [1.184359896s] Aug 27 14:57:21.328: INFO: Created: latency-svc-nwr6c Aug 27 14:57:21.329: INFO: Got endpoints: latency-svc-nwr6c [1.183901169s] Aug 27 14:57:21.370: INFO: Created: latency-svc-jrlfj Aug 27 14:57:21.386: INFO: Got endpoints: latency-svc-jrlfj [1.058099549s] Aug 27 14:57:21.501: INFO: Created: latency-svc-lj292 Aug 27 14:57:21.519: INFO: Got endpoints: latency-svc-lj292 [1.169437177s] Aug 27 14:57:21.565: INFO: Created: latency-svc-8j4df Aug 27 14:57:21.591: INFO: Got endpoints: latency-svc-8j4df [1.198902094s] Aug 27 14:57:21.664: INFO: Created: latency-svc-xxrzg Aug 27 14:57:21.699: INFO: Got endpoints: latency-svc-xxrzg [1.194952987s] Aug 27 14:57:21.740: INFO: Created: latency-svc-pk9wz Aug 27 14:57:21.748: INFO: Got endpoints: latency-svc-pk9wz [1.078662834s] Aug 27 14:57:22.047: INFO: Created: latency-svc-xfspl Aug 27 14:57:22.291: INFO: Created: latency-svc-dth5c Aug 27 14:57:22.319: INFO: Got endpoints: latency-svc-xfspl [1.639474768s] Aug 27 14:57:22.412: INFO: Got endpoints: latency-svc-dth5c [1.689086679s] Aug 27 14:57:22.427: INFO: Created: latency-svc-qjsgm Aug 27 14:57:22.456: INFO: Got endpoints: latency-svc-qjsgm [1.618756772s] Aug 27 14:57:22.879: INFO: Created: latency-svc-4vtdd Aug 27 14:57:22.881: INFO: Got endpoints: latency-svc-4vtdd [2.03014764s] Aug 27 14:57:22.939: INFO: Created: 
latency-svc-976pm Aug 27 14:57:22.953: INFO: Got endpoints: latency-svc-976pm [2.03104896s] Aug 27 14:57:22.954: INFO: Latencies: [56.036065ms 121.555321ms 164.420534ms 200.179815ms 266.80496ms 359.160431ms 437.34851ms 587.722537ms 671.465242ms 746.27516ms 810.16318ms 923.453873ms 1.026180714s 1.031410269s 1.058099549s 1.07313343s 1.078662834s 1.079345733s 1.098066481s 1.102475484s 1.120767054s 1.149888166s 1.169437177s 1.172080141s 1.175770221s 1.183901169s 1.184359896s 1.194952987s 1.198902094s 1.233257169s 1.23653838s 1.252631894s 1.256645576s 1.275207035s 1.277268285s 1.340035859s 1.365183467s 1.428365003s 1.468626654s 1.563416588s 1.605439296s 1.618756772s 1.639474768s 1.65252964s 1.658593781s 1.659594934s 1.662937257s 1.689086679s 1.740075156s 1.810404689s 1.888155327s 1.915182797s 1.931440495s 1.951850339s 1.958924472s 1.967271019s 1.970602403s 1.980475804s 1.983800766s 1.985138518s 1.985260871s 2.02293389s 2.03014764s 2.03104896s 2.048201541s 2.064801441s 2.068274475s 2.078936136s 2.083915436s 2.188657833s 2.21530135s 2.23174275s 2.232867776s 2.233130068s 2.23522051s 2.24159608s 2.270509938s 2.286117893s 2.307898128s 2.308906575s 2.310870171s 2.318492421s 2.324167984s 2.325727236s 2.330895601s 2.342079574s 2.403101917s 2.423788816s 2.496762171s 2.500670595s 2.574661513s 2.587408919s 2.611783672s 2.612648817s 2.640271928s 2.682212273s 2.685337447s 2.712035022s 2.724784168s 2.756214792s 2.77821304s 2.797369798s 2.79817356s 2.838920735s 2.849204746s 2.888162749s 2.929407974s 2.9797259s 2.99676236s 3.047841467s 3.070407338s 3.078865299s 3.090958051s 3.139772675s 3.141006639s 3.157023929s 3.168252465s 3.227741623s 3.304550156s 3.305620116s 3.315678595s 3.354874898s 3.359938719s 3.363998973s 3.408295466s 3.40873765s 3.438441799s 3.498493646s 3.504302684s 3.526089875s 3.546427446s 3.547910781s 3.578367517s 3.596114154s 3.599385882s 3.605256578s 3.624559256s 3.628263772s 3.677476726s 3.718657508s 3.720901979s 3.736870739s 3.803255355s 3.809016158s 3.915398046s 3.953715965s 3.969826835s 4.078432374s 4.108663686s 4.133636498s 4.145000628s 4.205538457s 4.253183141s 4.427861784s 4.486707011s 4.519427441s 4.543492998s 4.574390721s 4.606124172s 4.63609847s 4.660633441s 4.681395498s 4.682809345s 4.696446383s 4.737977321s 4.755732168s 4.798740803s 4.810519571s 4.936286655s 4.947001621s 5.091157912s 5.258800234s 5.309841548s 5.361607523s 5.395618786s 5.488682857s 5.617724059s 5.664428525s 5.682078329s 5.89788957s 5.995634725s 6.063742025s 6.145584987s 6.42808445s 6.500407772s 6.513698838s 6.625128378s 6.671751051s 6.67242196s 6.684733367s 6.731824551s 6.734287078s 6.766931066s 6.79193524s 6.831938524s 6.840221271s 6.919380209s 7.118606865s 7.257611707s 7.463613465s] Aug 27 14:57:22.955: INFO: 50 %ile: 2.77821304s Aug 27 14:57:22.956: INFO: 90 %ile: 5.995634725s Aug 27 14:57:22.956: INFO: 99 %ile: 7.257611707s Aug 27 14:57:22.956: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:57:22.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-gmfb4" for this suite. 
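The 50/90/99 %ile figures above summarize the 200 endpoint-propagation latency samples listed just before them. A small, self-contained sketch of how such a summary can be computed from the samples; the index convention is an assumption, not necessarily the exact one the e2e framework uses:

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the value at the p-th percentile of an ascending-sorted
// slice, using a simple (len*p)/100 index; clamped to the last element.
func percentile(sorted []time.Duration, p int) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := (len(sorted) * p) / 100
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// A handful of the measured latencies above, stood in for the full 200 samples.
	samples := []time.Duration{
		56 * time.Millisecond,
		121 * time.Millisecond,
		2778 * time.Millisecond,
		5995 * time.Millisecond,
		7257 * time.Millisecond,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
	}
}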
Aug 27 14:58:41.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:58:41.124: INFO: namespace: e2e-tests-svc-latency-gmfb4, resource: bindings, ignored listing per whitelist Aug 27 14:58:41.409: INFO: namespace e2e-tests-svc-latency-gmfb4 deletion completed in 1m18.353483776s • [SLOW TEST:125.336 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:58:41.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 27 14:58:41.581: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cbc59619-e875-11ea-894e-0242ac11000d" in namespace "e2e-tests-downward-api-z9794" to be "success or failure" Aug 27 14:58:41.601: INFO: Pod "downwardapi-volume-cbc59619-e875-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.042051ms Aug 27 14:58:43.711: INFO: Pod "downwardapi-volume-cbc59619-e875-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129576385s Aug 27 14:58:45.731: INFO: Pod "downwardapi-volume-cbc59619-e875-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149656608s Aug 27 14:58:47.869: INFO: Pod "downwardapi-volume-cbc59619-e875-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.287223128s STEP: Saw pod success Aug 27 14:58:47.869: INFO: Pod "downwardapi-volume-cbc59619-e875-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:58:47.874: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-cbc59619-e875-11ea-894e-0242ac11000d container client-container: STEP: delete the pod Aug 27 14:58:47.934: INFO: Waiting for pod downwardapi-volume-cbc59619-e875-11ea-894e-0242ac11000d to disappear Aug 27 14:58:48.491: INFO: Pod downwardapi-volume-cbc59619-e875-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:58:48.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-z9794" for this suite. 
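The Downward API case that just passed exposes only the pod's own name as a file in a downwardAPI volume and has the container print it back, so the test can compare the output against metadata.name. A minimal sketch of such a pod, assuming the k8s.io/api Go types; paths and names are illustrative:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Downward API volume with a single item mapping metadata.name to a file;
// the container cats the file so the logs contain the pod name.
func downwardAPIPodnamePod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-pod", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
}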
Aug 27 14:58:54.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 14:58:54.969: INFO: namespace: e2e-tests-downward-api-z9794, resource: bindings, ignored listing per whitelist Aug 27 14:58:54.992: INFO: namespace e2e-tests-downward-api-z9794 deletion completed in 6.180729s • [SLOW TEST:13.583 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 14:58:54.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 27 14:58:55.306: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d3efa2a8-e875-11ea-894e-0242ac11000d" in namespace "e2e-tests-projected-6gvz2" to be "success or failure" Aug 27 14:58:55.432: INFO: Pod "downwardapi-volume-d3efa2a8-e875-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 125.305984ms Aug 27 14:58:57.452: INFO: Pod "downwardapi-volume-d3efa2a8-e875-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145548416s Aug 27 14:58:59.472: INFO: Pod "downwardapi-volume-d3efa2a8-e875-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166218685s Aug 27 14:59:01.480: INFO: Pod "downwardapi-volume-d3efa2a8-e875-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.17364021s STEP: Saw pod success Aug 27 14:59:01.480: INFO: Pod "downwardapi-volume-d3efa2a8-e875-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 14:59:01.485: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-d3efa2a8-e875-11ea-894e-0242ac11000d container client-container: STEP: delete the pod Aug 27 14:59:01.523: INFO: Waiting for pod downwardapi-volume-d3efa2a8-e875-11ea-894e-0242ac11000d to disappear Aug 27 14:59:01.581: INFO: Pod downwardapi-volume-d3efa2a8-e875-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 14:59:01.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-6gvz2" for this suite. 
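The projected downwardAPI case above exposes the container's cpu limit through a resourceFieldRef; because the container deliberately sets no cpu limit, the value written to the file falls back to the node's allocatable cpu, which is what the test expects to read. A minimal sketch of such a pod, assuming the k8s.io/api Go types; paths and names are illustrative:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Projected downward API volume exposing limits.cpu for a container that sets
// no cpu limit, so the file should reflect node-allocatable cpu instead.
func projectedCPULimitPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-pod", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container", // no resources.limits.cpu set, on purpose
				Image:        "busybox",
				Command:      []string{"cat", "/etc/podinfo/cpu_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
}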
Aug 27 14:59:07.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 14:59:08.242: INFO: namespace: e2e-tests-projected-6gvz2, resource: bindings, ignored listing per whitelist
Aug 27 14:59:08.300: INFO: namespace e2e-tests-projected-6gvz2 deletion completed in 6.707240568s

• [SLOW TEST:13.306 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 14:59:08.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-dbcd29d8-e875-11ea-894e-0242ac11000d
STEP: Creating a pod to test consume configMaps
Aug 27 14:59:08.480: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dbcdd66d-e875-11ea-894e-0242ac11000d" in namespace "e2e-tests-projected-fzxv5" to be "success or failure"
Aug 27 14:59:08.611: INFO: Pod "pod-projected-configmaps-dbcdd66d-e875-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 131.31726ms
Aug 27 14:59:10.617: INFO: Pod "pod-projected-configmaps-dbcdd66d-e875-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136924478s
Aug 27 14:59:12.621: INFO: Pod "pod-projected-configmaps-dbcdd66d-e875-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141024245s
Aug 27 14:59:14.625: INFO: Pod "pod-projected-configmaps-dbcdd66d-e875-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.145148386s
STEP: Saw pod success
Aug 27 14:59:14.625: INFO: Pod "pod-projected-configmaps-dbcdd66d-e875-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 14:59:14.629: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-dbcdd66d-e875-11ea-894e-0242ac11000d container projected-configmap-volume-test: 
STEP: delete the pod
Aug 27 14:59:14.972: INFO: Waiting for pod pod-projected-configmaps-dbcdd66d-e875-11ea-894e-0242ac11000d to disappear
Aug 27 14:59:15.064: INFO: Pod pod-projected-configmaps-dbcdd66d-e875-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 14:59:15.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-fzxv5" for this suite.
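
A hedged sketch of the kind of pod this spec creates: a projected configMap volume whose Items remap a key to a nested path, consumed by a pod that runs as a non-root UID. The key names, UID, image and mount path below are assumptions for illustration, not the generated objects above.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRoot := int64(1000) // assumed non-root UID
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/projected-configmap-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cm",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map-example"},
								// Items remaps the key "data-1" to a nested path instead of a top-level file.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
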
Aug 27 14:59:21.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 14:59:21.307: INFO: namespace: e2e-tests-projected-fzxv5, resource: bindings, ignored listing per whitelist
Aug 27 14:59:21.344: INFO: namespace e2e-tests-projected-fzxv5 deletion completed in 6.273014645s

• [SLOW TEST:13.043 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 14:59:21.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Aug 27 14:59:23.324: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-vxtx5,SelfLink:/api/v1/namespaces/e2e-tests-watch-vxtx5/configmaps/e2e-watch-test-configmap-a,UID:e4a1aa86-e875-11ea-a485-0242ac120004,ResourceVersion:2654176,Generation:0,CreationTimestamp:2020-08-27 14:59:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 27 14:59:23.324: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-vxtx5,SelfLink:/api/v1/namespaces/e2e-tests-watch-vxtx5/configmaps/e2e-watch-test-configmap-a,UID:e4a1aa86-e875-11ea-a485-0242ac120004,ResourceVersion:2654176,Generation:0,CreationTimestamp:2020-08-27 14:59:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Aug 27 14:59:33.335: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-vxtx5,SelfLink:/api/v1/namespaces/e2e-tests-watch-vxtx5/configmaps/e2e-watch-test-configmap-a,UID:e4a1aa86-e875-11ea-a485-0242ac120004,ResourceVersion:2654194,Generation:0,CreationTimestamp:2020-08-27 14:59:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 27 14:59:33.336: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-vxtx5,SelfLink:/api/v1/namespaces/e2e-tests-watch-vxtx5/configmaps/e2e-watch-test-configmap-a,UID:e4a1aa86-e875-11ea-a485-0242ac120004,ResourceVersion:2654194,Generation:0,CreationTimestamp:2020-08-27 14:59:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Aug 27 14:59:43.348: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-vxtx5,SelfLink:/api/v1/namespaces/e2e-tests-watch-vxtx5/configmaps/e2e-watch-test-configmap-a,UID:e4a1aa86-e875-11ea-a485-0242ac120004,ResourceVersion:2654214,Generation:0,CreationTimestamp:2020-08-27 14:59:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 27 14:59:43.349: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-vxtx5,SelfLink:/api/v1/namespaces/e2e-tests-watch-vxtx5/configmaps/e2e-watch-test-configmap-a,UID:e4a1aa86-e875-11ea-a485-0242ac120004,ResourceVersion:2654214,Generation:0,CreationTimestamp:2020-08-27 14:59:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Aug 27 14:59:53.422: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-vxtx5,SelfLink:/api/v1/namespaces/e2e-tests-watch-vxtx5/configmaps/e2e-watch-test-configmap-a,UID:e4a1aa86-e875-11ea-a485-0242ac120004,ResourceVersion:2654234,Generation:0,CreationTimestamp:2020-08-27 14:59:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 27 14:59:53.423: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-vxtx5,SelfLink:/api/v1/namespaces/e2e-tests-watch-vxtx5/configmaps/e2e-watch-test-configmap-a,UID:e4a1aa86-e875-11ea-a485-0242ac120004,ResourceVersion:2654234,Generation:0,CreationTimestamp:2020-08-27 14:59:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Aug 27 15:00:03.430: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-vxtx5,SelfLink:/api/v1/namespaces/e2e-tests-watch-vxtx5/configmaps/e2e-watch-test-configmap-b,UID:fc8ed57b-e875-11ea-a485-0242ac120004,ResourceVersion:2654254,Generation:0,CreationTimestamp:2020-08-27 15:00:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 27 15:00:03.431: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-vxtx5,SelfLink:/api/v1/namespaces/e2e-tests-watch-vxtx5/configmaps/e2e-watch-test-configmap-b,UID:fc8ed57b-e875-11ea-a485-0242ac120004,ResourceVersion:2654254,Generation:0,CreationTimestamp:2020-08-27 15:00:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Aug 27 15:00:13.914: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-vxtx5,SelfLink:/api/v1/namespaces/e2e-tests-watch-vxtx5/configmaps/e2e-watch-test-configmap-b,UID:fc8ed57b-e875-11ea-a485-0242ac120004,ResourceVersion:2654273,Generation:0,CreationTimestamp:2020-08-27 15:00:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 27 15:00:13.915: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-vxtx5,SelfLink:/api/v1/namespaces/e2e-tests-watch-vxtx5/configmaps/e2e-watch-test-configmap-b,UID:fc8ed57b-e875-11ea-a485-0242ac120004,ResourceVersion:2654273,Generation:0,CreationTimestamp:2020-08-27 15:00:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:00:23.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-vxtx5" for this suite.
Aug 27 15:00:30.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:00:30.289: INFO: namespace: e2e-tests-watch-vxtx5, resource: bindings, ignored listing per whitelist
Aug 27 15:00:30.409: INFO: namespace e2e-tests-watch-vxtx5 deletion completed in 6.482788583s

• [SLOW TEST:69.063 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:00:30.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Aug 27 15:00:31.086: INFO: created pod pod-service-account-defaultsa
Aug 27 15:00:31.086: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Aug 27 15:00:31.254: INFO: created pod pod-service-account-mountsa
Aug 27 15:00:31.254: INFO: pod pod-service-account-mountsa service account token volume mount: true
Aug 27 15:00:31.332: INFO: created pod pod-service-account-nomountsa
Aug 27 15:00:31.333: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Aug 27 15:00:32.238: INFO: created pod pod-service-account-defaultsa-mountspec
Aug 27 15:00:32.238: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Aug 27 15:00:32.401: INFO: created pod pod-service-account-mountsa-mountspec
Aug 27 15:00:32.401: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Aug 27 15:00:32.774: INFO: created pod pod-service-account-nomountsa-mountspec
Aug 27 15:00:32.774: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Aug 27 15:00:32.795: INFO: created pod pod-service-account-defaultsa-nomountspec
Aug 27 15:00:32.795: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Aug 27 15:00:32.870: INFO: created pod pod-service-account-mountsa-nomountspec
Aug 27 15:00:32.870: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Aug 27 15:00:32.960: INFO: created pod pod-service-account-nomountsa-nomountspec
Aug 27 15:00:32.960: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:00:32.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-wjhkm" for this suite.
Aug 27 15:01:15.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:01:16.094: INFO: namespace: e2e-tests-svcaccounts-wjhkm, resource: bindings, ignored listing per whitelist
Aug 27 15:01:16.099: INFO: namespace e2e-tests-svcaccounts-wjhkm deletion completed in 43.080403668s

• [SLOW TEST:45.689 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:01:16.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-rwdlf/configmap-test-284e5953-e876-11ea-894e-0242ac11000d
STEP: Creating a pod to test consume configMaps
Aug 27 15:01:16.977: INFO: Waiting up to 5m0s for pod "pod-configmaps-285cbe23-e876-11ea-894e-0242ac11000d" in namespace "e2e-tests-configmap-rwdlf" to be "success or failure"
Aug 27 15:01:17.015: INFO: Pod "pod-configmaps-285cbe23-e876-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 37.952097ms
Aug 27 15:01:19.022: INFO: Pod "pod-configmaps-285cbe23-e876-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044606753s
Aug 27 15:01:21.028: INFO: Pod "pod-configmaps-285cbe23-e876-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050441761s
Aug 27 15:01:23.657: INFO: Pod "pod-configmaps-285cbe23-e876-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.679364141s
Aug 27 15:01:25.664: INFO: Pod "pod-configmaps-285cbe23-e876-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.686291343s
Aug 27 15:01:27.894: INFO: Pod "pod-configmaps-285cbe23-e876-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.916158025s
Aug 27 15:01:30.021: INFO: Pod "pod-configmaps-285cbe23-e876-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.043600409s
Aug 27 15:01:32.050: INFO: Pod "pod-configmaps-285cbe23-e876-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.072955851s
STEP: Saw pod success
Aug 27 15:01:32.051: INFO: Pod "pod-configmaps-285cbe23-e876-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:01:32.633: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-285cbe23-e876-11ea-894e-0242ac11000d container env-test: 
STEP: delete the pod
Aug 27 15:01:34.357: INFO: Waiting for pod pod-configmaps-285cbe23-e876-11ea-894e-0242ac11000d to disappear
Aug 27 15:01:35.014: INFO: Pod pod-configmaps-285cbe23-e876-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:01:35.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-rwdlf" for this suite.
Aug 27 15:01:46.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:01:46.114: INFO: namespace: e2e-tests-configmap-rwdlf, resource: bindings, ignored listing per whitelist
Aug 27 15:01:46.890: INFO: namespace e2e-tests-configmap-rwdlf deletion completed in 11.296999886s

• [SLOW TEST:30.791 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:01:46.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Aug 27 15:01:50.247: INFO: Waiting up to 5m0s for pod "pod-3bbcd84d-e876-11ea-894e-0242ac11000d" in namespace "e2e-tests-emptydir-mwdpt" to be "success or failure"
Aug 27 15:01:50.899: INFO: Pod "pod-3bbcd84d-e876-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 651.311844ms
Aug 27 15:01:52.907: INFO: Pod "pod-3bbcd84d-e876-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.659104491s
Aug 27 15:01:54.913: INFO: Pod "pod-3bbcd84d-e876-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.665487144s
Aug 27 15:01:57.097: INFO: Pod "pod-3bbcd84d-e876-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.849911074s
Aug 27 15:01:59.105: INFO: Pod "pod-3bbcd84d-e876-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 8.856995979s
Aug 27 15:02:01.145: INFO: Pod "pod-3bbcd84d-e876-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.897785989s
STEP: Saw pod success
Aug 27 15:02:01.146: INFO: Pod "pod-3bbcd84d-e876-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:02:01.860: INFO: Trying to get logs from node hunter-worker pod pod-3bbcd84d-e876-11ea-894e-0242ac11000d container test-container: 
STEP: delete the pod
Aug 27 15:02:02.703: INFO: Waiting for pod pod-3bbcd84d-e876-11ea-894e-0242ac11000d to disappear
Aug 27 15:02:02.987: INFO: Pod pod-3bbcd84d-e876-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:02:02.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-mwdpt" for this suite.
Aug 27 15:02:15.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:02:15.268: INFO: namespace: e2e-tests-emptydir-mwdpt, resource: bindings, ignored listing per whitelist
Aug 27 15:02:15.369: INFO: namespace e2e-tests-emptydir-mwdpt deletion completed in 12.37140061s

• [SLOW TEST:28.474 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:02:15.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 27 15:02:17.857: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-546216db-e876-11ea-894e-0242ac11000d
STEP: Creating a pod to test consume secrets
Aug 27 15:02:30.864: INFO: Waiting up to 5m0s for pod "pod-secrets-5466b90f-e876-11ea-894e-0242ac11000d" in namespace "e2e-tests-secrets-nxnjx" to be "success or failure"
Aug 27 15:02:31.250: INFO: Pod "pod-secrets-5466b90f-e876-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 386.496297ms
Aug 27 15:02:33.257: INFO: Pod "pod-secrets-5466b90f-e876-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.393384331s
Aug 27 15:02:35.536: INFO: Pod "pod-secrets-5466b90f-e876-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.671642709s
Aug 27 15:02:37.542: INFO: Pod "pod-secrets-5466b90f-e876-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.677794271s
Aug 27 15:02:39.787: INFO: Pod "pod-secrets-5466b90f-e876-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.922862726s
Aug 27 15:02:42.226: INFO: Pod "pod-secrets-5466b90f-e876-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.361713534s
Aug 27 15:02:44.241: INFO: Pod "pod-secrets-5466b90f-e876-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.37757629s
STEP: Saw pod success
Aug 27 15:02:44.242: INFO: Pod "pod-secrets-5466b90f-e876-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:02:44.859: INFO: Trying to get logs from node hunter-worker pod pod-secrets-5466b90f-e876-11ea-894e-0242ac11000d container secret-volume-test: 
STEP: delete the pod
Aug 27 15:02:45.570: INFO: Waiting for pod pod-secrets-5466b90f-e876-11ea-894e-0242ac11000d to disappear
Aug 27 15:02:46.248: INFO: Pod pod-secrets-5466b90f-e876-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:02:46.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-nxnjx" for this suite.
Aug 27 15:03:00.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:03:01.279: INFO: namespace: e2e-tests-secrets-nxnjx, resource: bindings, ignored listing per whitelist
Aug 27 15:03:02.084: INFO: namespace e2e-tests-secrets-nxnjx deletion completed in 15.761007771s

• [SLOW TEST:31.928 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
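
For orientation, a minimal sketch of the secret-plus-pod pair this kind of spec creates: the secret is mounted as a volume and the test container prints one key back. Key names, value and image are assumptions, not objects taken from the run above.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secret := corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test-example"},
		Data:       map[string][]byte{"data-1": []byte("value-1")}, // assumed key/value
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: secret.Name},
				},
			}},
		},
	}
	for _, obj := range []interface{}{secret, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
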
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:03:02.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Aug 27 15:03:03.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-95zvm'
Aug 27 15:03:06.063: INFO: stderr: ""
Aug 27 15:03:06.063: INFO: stdout: "pod/pause created\n"
Aug 27 15:03:06.064: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Aug 27 15:03:06.064: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-95zvm" to be "running and ready"
Aug 27 15:03:06.453: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 388.458542ms
Aug 27 15:03:08.721: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.656784777s
Aug 27 15:03:10.727: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.662929689s
Aug 27 15:03:12.901: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.836849087s
Aug 27 15:03:15.002: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.937751904s
Aug 27 15:03:17.278: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 11.213533338s
Aug 27 15:03:17.278: INFO: Pod "pause" satisfied condition "running and ready"
Aug 27 15:03:17.278: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Aug 27 15:03:17.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-95zvm'
Aug 27 15:03:18.521: INFO: stderr: ""
Aug 27 15:03:18.521: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Aug 27 15:03:18.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-95zvm'
Aug 27 15:03:19.935: INFO: stderr: ""
Aug 27 15:03:19.935: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          13s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Aug 27 15:03:19.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-95zvm'
Aug 27 15:03:21.168: INFO: stderr: ""
Aug 27 15:03:21.168: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Aug 27 15:03:21.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-95zvm'
Aug 27 15:03:22.417: INFO: stderr: ""
Aug 27 15:03:22.417: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          16s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Aug 27 15:03:22.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-95zvm'
Aug 27 15:03:23.683: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 15:03:23.683: INFO: stdout: "pod \"pause\" force deleted\n"
Aug 27 15:03:23.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-95zvm'
Aug 27 15:03:24.979: INFO: stderr: "No resources found.\n"
Aug 27 15:03:24.980: INFO: stdout: ""
Aug 27 15:03:24.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-95zvm -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 27 15:03:26.237: INFO: stderr: ""
Aug 27 15:03:26.238: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:03:26.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-95zvm" for this suite.
Aug 27 15:03:32.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:03:32.466: INFO: namespace: e2e-tests-kubectl-95zvm, resource: bindings, ignored listing per whitelist
Aug 27 15:03:32.483: INFO: namespace e2e-tests-kubectl-95zvm deletion completed in 6.184549851s

• [SLOW TEST:30.398 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
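
The test above drives /usr/local/bin/kubectl directly, as the Running lines show. For comparison only, here is a sketch of the same add-label and remove-label steps done with client-go strategic-merge patches; it assumes a recent client-go (where Patch takes a context), and the kubeconfig path, namespace and pod name are reused from this run purely as placeholders.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns, pod := "e2e-tests-kubectl-95zvm", "pause"

	// Equivalent of `kubectl label pods pause testing-label=testing-label-value`.
	add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	if _, err := cs.CoreV1().Pods(ns).Patch(context.TODO(), pod, types.StrategicMergePatchType, add, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Equivalent of `kubectl label pods pause testing-label-` (a null value removes the key).
	remove := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
	if _, err := cs.CoreV1().Pods(ns).Patch(context.TODO(), pod, types.StrategicMergePatchType, remove, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("label added and removed")
}
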
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:03:32.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-799309b8-e876-11ea-894e-0242ac11000d
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:03:39.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-lfqzk" for this suite.
Aug 27 15:04:01.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:04:01.656: INFO: namespace: e2e-tests-configmap-lfqzk, resource: bindings, ignored listing per whitelist
Aug 27 15:04:01.668: INFO: namespace e2e-tests-configmap-lfqzk deletion completed in 22.149377452s

• [SLOW TEST:29.184 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
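
A short sketch of a ConfigMap carrying both Data and BinaryData, which is the shape this spec mounts and then reads back as text and raw bytes; the key names and byte values here are assumptions for illustration. When such a ConfigMap is mounted as a volume, each key becomes a file, and the BinaryData entry keeps its bytes untouched.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd-example"},
		Data:       map[string]string{"data": "value"},                  // plain UTF-8 text
		BinaryData: map[string][]byte{"dump": {0xde, 0xad, 0xbe, 0xef}}, // arbitrary bytes
	}
	out, _ := json.MarshalIndent(cm, "", "  ")
	fmt.Println(string(out))
}
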
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:04:01.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Aug 27 15:04:01.942: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-2wjwr,SelfLink:/api/v1/namespaces/e2e-tests-watch-2wjwr/configmaps/e2e-watch-test-resource-version,UID:8aa50ad2-e876-11ea-a485-0242ac120004,ResourceVersion:2654925,Generation:0,CreationTimestamp:2020-08-27 15:04:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 27 15:04:01.943: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-2wjwr,SelfLink:/api/v1/namespaces/e2e-tests-watch-2wjwr/configmaps/e2e-watch-test-resource-version,UID:8aa50ad2-e876-11ea-a485-0242ac120004,ResourceVersion:2654926,Generation:0,CreationTimestamp:2020-08-27 15:04:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:04:01.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-2wjwr" for this suite.
Aug 27 15:04:07.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:04:08.030: INFO: namespace: e2e-tests-watch-2wjwr, resource: bindings, ignored listing per whitelist
Aug 27 15:04:08.099: INFO: namespace e2e-tests-watch-2wjwr deletion completed in 6.147488704s

• [SLOW TEST:6.429 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
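
A hedged client-go sketch of the same idea: open a watch with ListOptions.ResourceVersion set to a version captured from an earlier update, so only later events (the MODIFIED and DELETED seen above) are replayed. It assumes a recent client-go where Watch takes a context; the namespace, label selector and placeholder version are illustrative, not values from this run.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Placeholder: set this to metadata.resourceVersion returned by the first update.
	fromVersion := "REPLACE-WITH-RV-FROM-FIRST-UPDATE"

	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector:   "watch-this-configmap=from-resource-version",
		ResourceVersion: fromVersion,
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Events older than fromVersion are not delivered; newer ones arrive in order.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}
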
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:04:08.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 27 15:04:14.848: INFO: Successfully updated pod "pod-update-8e7edb80-e876-11ea-894e-0242ac11000d"
STEP: verifying the updated pod is in kubernetes
Aug 27 15:04:14.914: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:04:14.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-qg8p5" for this suite.
Aug 27 15:04:41.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:04:41.072: INFO: namespace: e2e-tests-pods-qg8p5, resource: bindings, ignored listing per whitelist
Aug 27 15:04:41.174: INFO: namespace e2e-tests-pods-qg8p5 deletion completed in 26.251894362s

• [SLOW TEST:33.073 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
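
The "updating the pod" step above is the classic read-modify-write against the API server. A sketch of doing that with client-go and a conflict retry is below; the namespace, pod name and label mutation are assumptions, and a recent client-go (context-taking Get/Update) is assumed as well.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns, name := "default", "pod-update-example" // assumed namespace and pod name

	// Re-fetch and retry on resourceVersion conflicts so concurrent writers don't break the update.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, getErr := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if getErr != nil {
			return getErr
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels["time"] = "updated" // assumed label mutation
		_, updateErr := cs.CoreV1().Pods(ns).Update(context.TODO(), pod, metav1.UpdateOptions{})
		return updateErr
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("Pod update OK")
}
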
------------------------------
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:04:41.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Aug 27 15:04:41.636: INFO: Waiting up to 5m0s for pod "var-expansion-a260b1cc-e876-11ea-894e-0242ac11000d" in namespace "e2e-tests-var-expansion-b9m5h" to be "success or failure"
Aug 27 15:04:42.067: INFO: Pod "var-expansion-a260b1cc-e876-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 430.797525ms
Aug 27 15:04:44.074: INFO: Pod "var-expansion-a260b1cc-e876-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.437375058s
Aug 27 15:04:46.079: INFO: Pod "var-expansion-a260b1cc-e876-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.442703632s
Aug 27 15:04:48.615: INFO: Pod "var-expansion-a260b1cc-e876-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.978856202s
Aug 27 15:04:50.621: INFO: Pod "var-expansion-a260b1cc-e876-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.985267746s
STEP: Saw pod success
Aug 27 15:04:50.622: INFO: Pod "var-expansion-a260b1cc-e876-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:04:50.625: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-a260b1cc-e876-11ea-894e-0242ac11000d container dapi-container: 
STEP: delete the pod
Aug 27 15:04:50.650: INFO: Waiting for pod var-expansion-a260b1cc-e876-11ea-894e-0242ac11000d to disappear
Aug 27 15:04:50.696: INFO: Pod var-expansion-a260b1cc-e876-11ea-894e-0242ac11000d no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:04:50.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-b9m5h" for this suite.
Aug 27 15:04:56.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:04:56.905: INFO: namespace: e2e-tests-var-expansion-b9m5h, resource: bindings, ignored listing per whitelist
Aug 27 15:04:56.920: INFO: namespace e2e-tests-var-expansion-b9m5h deletion completed in 6.194182615s

• [SLOW TEST:15.745 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
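
A minimal sketch of the kind of pod this spec creates: a $(VAR) reference in the container args is expanded by Kubernetes from the container's own environment before the command runs. The variable name, image and command below are assumptions, not the suite's generated values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical pod: $(TEST_VAR) in Args is substituted with the env value "test-value".
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c"},
				Args:    []string{"echo $(TEST_VAR)"},
				Env:     []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
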
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:04:56.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Aug 27 15:04:57.259: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-gvzgr,SelfLink:/api/v1/namespaces/e2e-tests-watch-gvzgr/configmaps/e2e-watch-test-label-changed,UID:aba5f7d2-e876-11ea-a485-0242ac120004,ResourceVersion:2655084,Generation:0,CreationTimestamp:2020-08-27 15:04:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 27 15:04:57.260: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-gvzgr,SelfLink:/api/v1/namespaces/e2e-tests-watch-gvzgr/configmaps/e2e-watch-test-label-changed,UID:aba5f7d2-e876-11ea-a485-0242ac120004,ResourceVersion:2655085,Generation:0,CreationTimestamp:2020-08-27 15:04:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 27 15:04:57.261: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-gvzgr,SelfLink:/api/v1/namespaces/e2e-tests-watch-gvzgr/configmaps/e2e-watch-test-label-changed,UID:aba5f7d2-e876-11ea-a485-0242ac120004,ResourceVersion:2655086,Generation:0,CreationTimestamp:2020-08-27 15:04:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Aug 27 15:05:08.006: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-gvzgr,SelfLink:/api/v1/namespaces/e2e-tests-watch-gvzgr/configmaps/e2e-watch-test-label-changed,UID:aba5f7d2-e876-11ea-a485-0242ac120004,ResourceVersion:2655107,Generation:0,CreationTimestamp:2020-08-27 15:04:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 27 15:05:08.007: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-gvzgr,SelfLink:/api/v1/namespaces/e2e-tests-watch-gvzgr/configmaps/e2e-watch-test-label-changed,UID:aba5f7d2-e876-11ea-a485-0242ac120004,ResourceVersion:2655109,Generation:0,CreationTimestamp:2020-08-27 15:04:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Aug 27 15:05:08.008: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-gvzgr,SelfLink:/api/v1/namespaces/e2e-tests-watch-gvzgr/configmaps/e2e-watch-test-label-changed,UID:aba5f7d2-e876-11ea-a485-0242ac120004,ResourceVersion:2655110,Generation:0,CreationTimestamp:2020-08-27 15:04:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:05:08.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-gvzgr" for this suite.
Aug 27 15:05:16.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:05:16.774: INFO: namespace: e2e-tests-watch-gvzgr, resource: bindings, ignored listing per whitelist
Aug 27 15:05:16.825: INFO: namespace e2e-tests-watch-gvzgr deletion completed in 8.57078028s

• [SLOW TEST:19.904 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:05:16.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-njpvt
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 27 15:05:18.104: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 27 15:05:55.268: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.229:8080/dial?request=hostName&protocol=udp&host=10.244.2.228&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-njpvt PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 15:05:55.268: INFO: >>> kubeConfig: /root/.kube/config
I0827 15:05:55.320921       7 log.go:172] (0x4001cc84d0) (0x4001a1edc0) Create stream
I0827 15:05:55.321092       7 log.go:172] (0x4001cc84d0) (0x4001a1edc0) Stream added, broadcasting: 1
I0827 15:05:55.324096       7 log.go:172] (0x4001cc84d0) Reply frame received for 1
I0827 15:05:55.324321       7 log.go:172] (0x4001cc84d0) (0x4001a1ef00) Create stream
I0827 15:05:55.324412       7 log.go:172] (0x4001cc84d0) (0x4001a1ef00) Stream added, broadcasting: 3
I0827 15:05:55.326150       7 log.go:172] (0x4001cc84d0) Reply frame received for 3
I0827 15:05:55.326321       7 log.go:172] (0x4001cc84d0) (0x4000ff2320) Create stream
I0827 15:05:55.326408       7 log.go:172] (0x4001cc84d0) (0x4000ff2320) Stream added, broadcasting: 5
I0827 15:05:55.327822       7 log.go:172] (0x4001cc84d0) Reply frame received for 5
I0827 15:05:55.400201       7 log.go:172] (0x4001cc84d0) Data frame received for 3
I0827 15:05:55.400364       7 log.go:172] (0x4001a1ef00) (3) Data frame handling
I0827 15:05:55.400480       7 log.go:172] (0x4001a1ef00) (3) Data frame sent
I0827 15:05:55.400577       7 log.go:172] (0x4001cc84d0) Data frame received for 3
I0827 15:05:55.400661       7 log.go:172] (0x4001a1ef00) (3) Data frame handling
I0827 15:05:55.400980       7 log.go:172] (0x4001cc84d0) Data frame received for 5
I0827 15:05:55.401145       7 log.go:172] (0x4000ff2320) (5) Data frame handling
I0827 15:05:55.402457       7 log.go:172] (0x4001cc84d0) Data frame received for 1
I0827 15:05:55.402540       7 log.go:172] (0x4001a1edc0) (1) Data frame handling
I0827 15:05:55.402620       7 log.go:172] (0x4001a1edc0) (1) Data frame sent
I0827 15:05:55.402710       7 log.go:172] (0x4001cc84d0) (0x4001a1edc0) Stream removed, broadcasting: 1
I0827 15:05:55.402814       7 log.go:172] (0x4001cc84d0) Go away received
I0827 15:05:55.403232       7 log.go:172] (0x4001cc84d0) (0x4001a1edc0) Stream removed, broadcasting: 1
I0827 15:05:55.403342       7 log.go:172] (0x4001cc84d0) (0x4001a1ef00) Stream removed, broadcasting: 3
I0827 15:05:55.403401       7 log.go:172] (0x4001cc84d0) (0x4000ff2320) Stream removed, broadcasting: 5
Aug 27 15:05:55.403: INFO: Waiting for endpoints: map[]
Aug 27 15:05:55.408: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.229:8080/dial?request=hostName&protocol=udp&host=10.244.1.67&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-njpvt PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 15:05:55.408: INFO: >>> kubeConfig: /root/.kube/config
I0827 15:05:55.460143       7 log.go:172] (0x400099ed10) (0x40018595e0) Create stream
I0827 15:05:55.460365       7 log.go:172] (0x400099ed10) (0x40018595e0) Stream added, broadcasting: 1
I0827 15:05:55.463188       7 log.go:172] (0x400099ed10) Reply frame received for 1
I0827 15:05:55.463345       7 log.go:172] (0x400099ed10) (0x4001a1efa0) Create stream
I0827 15:05:55.463421       7 log.go:172] (0x400099ed10) (0x4001a1efa0) Stream added, broadcasting: 3
I0827 15:05:55.464829       7 log.go:172] (0x400099ed10) Reply frame received for 3
I0827 15:05:55.464997       7 log.go:172] (0x400099ed10) (0x4001859680) Create stream
I0827 15:05:55.465103       7 log.go:172] (0x400099ed10) (0x4001859680) Stream added, broadcasting: 5
I0827 15:05:55.466545       7 log.go:172] (0x400099ed10) Reply frame received for 5
I0827 15:05:55.538001       7 log.go:172] (0x400099ed10) Data frame received for 3
I0827 15:05:55.538153       7 log.go:172] (0x4001a1efa0) (3) Data frame handling
I0827 15:05:55.538289       7 log.go:172] (0x4001a1efa0) (3) Data frame sent
I0827 15:05:55.538399       7 log.go:172] (0x400099ed10) Data frame received for 3
I0827 15:05:55.538508       7 log.go:172] (0x4001a1efa0) (3) Data frame handling
I0827 15:05:55.538613       7 log.go:172] (0x400099ed10) Data frame received for 5
I0827 15:05:55.538722       7 log.go:172] (0x400099ed10) Data frame received for 1
I0827 15:05:55.538820       7 log.go:172] (0x40018595e0) (1) Data frame handling
I0827 15:05:55.538905       7 log.go:172] (0x4001859680) (5) Data frame handling
I0827 15:05:55.539057       7 log.go:172] (0x40018595e0) (1) Data frame sent
I0827 15:05:55.539177       7 log.go:172] (0x400099ed10) (0x40018595e0) Stream removed, broadcasting: 1
I0827 15:05:55.539295       7 log.go:172] (0x400099ed10) Go away received
I0827 15:05:55.539642       7 log.go:172] (0x400099ed10) (0x40018595e0) Stream removed, broadcasting: 1
I0827 15:05:55.539737       7 log.go:172] (0x400099ed10) (0x4001a1efa0) Stream removed, broadcasting: 3
I0827 15:05:55.539825       7 log.go:172] (0x400099ed10) (0x4001859680) Stream removed, broadcasting: 5
Aug 27 15:05:55.540: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:05:55.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-njpvt" for this suite.
Aug 27 15:06:34.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:06:34.151: INFO: namespace: e2e-tests-pod-network-test-njpvt, resource: bindings, ignored listing per whitelist
Aug 27 15:06:34.205: INFO: namespace e2e-tests-pod-network-test-njpvt deletion completed in 38.659988017s

• [SLOW TEST:77.379 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
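The intra-pod UDP check above drives the netexec "dial" endpoint from a host test container: an HTTP GET to port 8080 of the test container asks it to send a UDP probe to the target pod on port 8081 and report which hostnames answered, which is exactly what the two curl commands in the log do. The following is a minimal Go sketch of that request; the IP addresses are copied from this particular run and would differ on any other cluster.

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// IPs taken from the log above; in a real run the e2e framework discovers
	// the test-container and target-pod IPs only after the pods are running.
	testContainerIP := "10.244.2.229"
	targetPodIP := "10.244.2.228"

	url := fmt.Sprintf(
		"http://%s:8080/dial?request=hostName&protocol=udp&host=%s&port=8081&tries=1",
		testContainerIP, targetPodIP)

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// A successful UDP round trip yields JSON such as {"responses":["<pod hostname>"]};
	// the test retries until every expected endpoint has answered.
	fmt.Println(string(body))
}
```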
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:06:34.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-ftfpx
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-ftfpx to expose endpoints map[]
Aug 27 15:06:35.275: INFO: Get endpoints failed (5.535447ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Aug 27 15:06:36.414: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-ftfpx exposes endpoints map[] (1.14475695s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-ftfpx
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-ftfpx to expose endpoints map[pod1:[80]]
Aug 27 15:06:42.592: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (5.952791694s elapsed, will retry)
Aug 27 15:06:48.955: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-ftfpx exposes endpoints map[pod1:[80]] (12.315384431s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-ftfpx
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-ftfpx to expose endpoints map[pod1:[80] pod2:[80]]
Aug 27 15:06:54.424: INFO: Unexpected endpoints: found map[e6edbd93-e876-11ea-a485-0242ac120004:[80]], expected map[pod1:[80] pod2:[80]] (5.463419325s elapsed, will retry)
Aug 27 15:06:56.454: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-ftfpx exposes endpoints map[pod1:[80] pod2:[80]] (7.493517938s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-ftfpx
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-ftfpx to expose endpoints map[pod2:[80]]
Aug 27 15:06:56.478: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-ftfpx exposes endpoints map[pod2:[80]] (17.455477ms elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-ftfpx
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-ftfpx to expose endpoints map[]
Aug 27 15:06:56.543: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-ftfpx exposes endpoints map[] (60.211481ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:06:56.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-ftfpx" for this suite.
Aug 27 15:07:23.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:07:23.153: INFO: namespace: e2e-tests-services-ftfpx, resource: bindings, ignored listing per whitelist
Aug 27 15:07:23.221: INFO: namespace e2e-tests-services-ftfpx deletion completed in 26.215057669s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:49.014 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
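The Services spec above repeatedly compares the service's Endpoints object against an expected pod-to-port map as pods are created and deleted (the "expose endpoints map[pod1:[80] pod2:[80]]" messages). A minimal client-go sketch of that comparison is shown below, using the 1.13-era signatures (no context arguments); the namespace and service name are the ephemeral ones from this run and exist only while the test is executing.

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Namespace and service name follow the run above; both are ephemeral.
	eps, err := client.CoreV1().Endpoints("e2e-tests-services-ftfpx").
		Get("endpoint-test2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Flatten the Endpoints subsets into pod -> port pairs, the same shape the
	// "expose endpoints map[...]" log messages describe.
	for _, subset := range eps.Subsets {
		for _, addr := range subset.Addresses {
			name := addr.IP
			if addr.TargetRef != nil {
				name = addr.TargetRef.Name
			}
			for _, port := range subset.Ports {
				fmt.Printf("%s -> %d\n", name, port.Port)
			}
		}
	}
}
```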
SSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:07:23.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Aug 27 15:07:31.557: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-02d9d510-e877-11ea-894e-0242ac11000d,GenerateName:,Namespace:e2e-tests-events-c7txg,SelfLink:/api/v1/namespaces/e2e-tests-events-c7txg/pods/send-events-02d9d510-e877-11ea-894e-0242ac11000d,UID:02de45d6-e877-11ea-a485-0242ac120004,ResourceVersion:2655490,Generation:0,CreationTimestamp:2020-08-27 15:07:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 480475707,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-trnmb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-trnmb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-trnmb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x40003745f0} {node.kubernetes.io/unreachable Exists  NoExecute 0x4000374640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 15:07:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 15:07:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 15:07:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 15:07:23 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.69,StartTime:2020-08-27 15:07:23 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-08-27 15:07:28 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://50d0fbba107d9bf3dd590a3f3a518dc1dc458490c08e4213fb3f5936b730f4a2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Aug 27 15:07:33.569: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Aug 27 15:07:35.826: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:07:36.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-c7txg" for this suite.
Aug 27 15:08:15.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:08:15.293: INFO: namespace: e2e-tests-events-c7txg, resource: bindings, ignored listing per whitelist
Aug 27 15:08:15.358: INFO: namespace e2e-tests-events-c7txg deletion completed in 38.295511295s

• [SLOW TEST:52.136 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
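The Events spec above creates a pod and then checks that both the scheduler and the kubelet emitted events about it ("Saw scheduler event" / "Saw kubelet event"). A minimal sketch of that check follows, listing events with a field selector per source; it assumes 1.13-era client-go signatures, and the namespace and pod name are simply the ones from this run.

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Namespace and pod name copied from the run above.
	ns := "e2e-tests-events-c7txg"
	podName := "send-events-02d9d510-e877-11ea-894e-0242ac11000d"

	// One query per event source, mirroring the two "checking for ... event" steps.
	for _, source := range []string{"default-scheduler", "kubelet"} {
		selector := fmt.Sprintf(
			"involvedObject.kind=Pod,involvedObject.name=%s,source=%s", podName, source)
		events, err := client.CoreV1().Events(ns).List(metav1.ListOptions{FieldSelector: selector})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d event(s) from %s\n", len(events.Items), source)
	}
}
```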
SSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:08:15.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-c7kkt
Aug 27 15:08:21.658: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-c7kkt
STEP: checking the pod's current state and verifying that restartCount is present
Aug 27 15:08:21.663: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:12:22.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-c7kkt" for this suite.
Aug 27 15:12:28.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:12:28.988: INFO: namespace: e2e-tests-container-probe-c7kkt, resource: bindings, ignored listing per whitelist
Aug 27 15:12:29.029: INFO: namespace e2e-tests-container-probe-c7kkt deletion completed in 6.420800022s

• [SLOW TEST:253.670 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:12:29.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-b94a8c91-e877-11ea-894e-0242ac11000d
STEP: Creating a pod to test consume configMaps
Aug 27 15:12:29.805: INFO: Waiting up to 5m0s for pod "pod-configmaps-b96b5547-e877-11ea-894e-0242ac11000d" in namespace "e2e-tests-configmap-z9gd7" to be "success or failure"
Aug 27 15:12:29.853: INFO: Pod "pod-configmaps-b96b5547-e877-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 47.961012ms
Aug 27 15:12:31.876: INFO: Pod "pod-configmaps-b96b5547-e877-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070985033s
Aug 27 15:12:34.062: INFO: Pod "pod-configmaps-b96b5547-e877-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.256777819s
Aug 27 15:12:36.068: INFO: Pod "pod-configmaps-b96b5547-e877-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.263215966s
Aug 27 15:12:38.667: INFO: Pod "pod-configmaps-b96b5547-e877-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.862191032s
STEP: Saw pod success
Aug 27 15:12:38.668: INFO: Pod "pod-configmaps-b96b5547-e877-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:12:38.790: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-b96b5547-e877-11ea-894e-0242ac11000d container configmap-volume-test: 
STEP: delete the pod
Aug 27 15:12:38.813: INFO: Waiting for pod pod-configmaps-b96b5547-e877-11ea-894e-0242ac11000d to disappear
Aug 27 15:12:38.866: INFO: Pod pod-configmaps-b96b5547-e877-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:12:38.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-z9gd7" for this suite.
Aug 27 15:12:45.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:12:45.172: INFO: namespace: e2e-tests-configmap-z9gd7, resource: bindings, ignored listing per whitelist
Aug 27 15:12:45.273: INFO: namespace e2e-tests-configmap-z9gd7 deletion completed in 6.397350686s

• [SLOW TEST:16.244 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
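The ConfigMap spec above mounts a ConfigMap into a pod with an explicit key-to-path mapping and then reads the remapped file back from the volume. The sketch below builds that kind of pod spec; it is illustrative only, the ConfigMap name, key, image, and paths are assumptions rather than the exact values the test generates.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pod that mounts one ConfigMap key under a remapped path and cats it,
	// which is essentially what the "consume configMaps" helper pod does.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "configmap-volume",
				VolumeSource: v1.VolumeSource{
					ConfigMap: &v1.ConfigMapVolumeSource{
						LocalObjectReference: v1.LocalObjectReference{Name: "configmap-test-volume-map"},
						Items: []v1.KeyToPath{{
							Key:  "data-2",         // key inside the ConfigMap
							Path: "path/to/data-2", // remapped file name in the volume
						}},
					},
				},
			}},
			Containers: []v1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	fmt.Printf("%s mounts ConfigMap %q\n", pod.Name, pod.Spec.Volumes[0].ConfigMap.Name)
}
```

The test then waits for the pod to reach "Succeeded" and asserts that the container's log contains the mapped key's value, which is what the "success or failure" condition in similar specs above refers to.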
SSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:12:45.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Aug 27 15:12:45.681: INFO: Pod name pod-release: Found 0 pods out of 1
Aug 27 15:12:50.688: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:12:51.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-l777n" for this suite.
Aug 27 15:13:06.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:13:07.158: INFO: namespace: e2e-tests-replication-controller-l777n, resource: bindings, ignored listing per whitelist
Aug 27 15:13:07.181: INFO: namespace e2e-tests-replication-controller-l777n deletion completed in 15.433278622s

• [SLOW TEST:21.907 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
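The ReplicationController spec above changes the matched label on one of the controller's pods; once the pod no longer satisfies the RC's selector, the controller "releases" it (drops its controller ownerReference) and creates a replacement. A minimal sketch of that relabel follows, using a strategic-merge patch and 1.13-era client-go signatures; the namespace, pod name, and label values are hypothetical.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Hypothetical namespace and pod; in the test the pod was created by an
	// RC whose selector is name=pod-release.
	ns, podName := "default", "pod-release-abc12"

	// Overwrite the matched label. Once the pod no longer satisfies the RC's
	// selector, the controller orphans it and spins up a replacement pod.
	patch := []byte(`{"metadata":{"labels":{"name":"not-pod-release"}}}`)
	if _, err := client.CoreV1().Pods(ns).Patch(podName, types.StrategicMergePatchType, patch); err != nil {
		panic(err)
	}
	fmt.Println("pod relabelled; the ReplicationController will release it")
}
```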
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:13:07.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-8jprp
Aug 27 15:13:14.535: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-8jprp
STEP: checking the pod's current state and verifying that restartCount is present
Aug 27 15:13:14.539: INFO: Initial restart count of pod liveness-exec is 0
Aug 27 15:14:08.998: INFO: Restart count of pod e2e-tests-container-probe-8jprp/liveness-exec is now 1 (54.458580965s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:14:09.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-8jprp" for this suite.
Aug 27 15:14:17.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:14:17.172: INFO: namespace: e2e-tests-container-probe-8jprp, resource: bindings, ignored listing per whitelist
Aug 27 15:14:17.516: INFO: namespace e2e-tests-container-probe-8jprp deletion completed in 8.493818388s

• [SLOW TEST:70.333 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
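Both "Probing container" specs above rely on the same exec liveness probe, `cat /tmp/health`. In the first spec the container keeps the file, so the restart count stays at 0 for the whole four-minute observation window; in the second the container deletes the file after 30 seconds, the probe starts failing, and the kubelet restarts the container (the log shows the restart count reaching 1 about 54 seconds in). Below is a minimal sketch of the restarting variant; it uses the 1.13-era `v1.Handler` type (renamed `ProbeHandler` in much later releases), and the image and timings are illustrative rather than the test's exact values.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The container creates /tmp/health, deletes it after 30s, then idles.
	// Once the file is gone, "cat /tmp/health" fails and the kubelet restarts
	// the container; keeping the file for the whole run is the *not*-restarted case.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "liveness",
				Image: "busybox",
				Command: []string{"/bin/sh", "-c",
					"touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &v1.Probe{
					Handler: v1.Handler{
						Exec: &v1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
	fmt.Printf("probe command: %v\n", pod.Spec.Containers[0].LivenessProbe.Exec.Command)
}
```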
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:14:17.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 27 15:14:19.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-pxw28'
Aug 27 15:14:31.576: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 27 15:14:31.576: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Aug 27 15:14:33.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-pxw28'
Aug 27 15:14:35.738: INFO: stderr: ""
Aug 27 15:14:35.738: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:14:35.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pxw28" for this suite.
Aug 27 15:14:44.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:14:44.941: INFO: namespace: e2e-tests-kubectl-pxw28, resource: bindings, ignored listing per whitelist
Aug 27 15:14:44.987: INFO: namespace e2e-tests-kubectl-pxw28 deletion completed in 9.171554679s

• [SLOW TEST:27.471 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
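The Kubectl spec above uses the deprecated `kubectl run --generator=deployment/v1beta1` form, and the deprecation warning in its stderr is expected on v1.13; the warning itself points at `kubectl create` as the replacement. For comparison, here is a sketch of creating the same deployment through client-go's apps/v1 API with 1.13-era signatures (no context arguments); the replica count and labels are assumptions, only the name and image come from the log.

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	replicas := int32(1)
	labels := map[string]string{"run": "e2e-test-nginx-deployment"}
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{
					Containers: []v1.Container{{
						Name:  "e2e-test-nginx-deployment",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}

	// 1.13-era client-go: Create takes the object directly (no context).
	if _, err := client.AppsV1().Deployments("default").Create(dep); err != nil {
		panic(err)
	}
	fmt.Println("deployment created")
}
```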
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:14:44.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Aug 27 15:14:46.157: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 27 15:14:46.546: INFO: Waiting for terminating namespaces to be deleted...
Aug 27 15:14:46.549: INFO: 
Logging pods the kubelet thinks are on node hunter-worker before test
Aug 27 15:14:46.560: INFO: kube-proxy-xm64c from kube-system started at 2020-08-15 09:32:58 +0000 UTC (1 container statuses recorded)
Aug 27 15:14:46.560: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 27 15:14:46.560: INFO: rally-a0035e6c-0q7zegi3-7f9d59c68-b7x9w from c-rally-a0035e6c-720erhyc started at 2020-08-23 21:15:14 +0000 UTC (1 container statuses recorded)
Aug 27 15:14:46.560: INFO: 	Container rally-a0035e6c-0q7zegi3 ready: true, restart count 89
Aug 27 15:14:46.560: INFO: kindnet-kvcmt from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded)
Aug 27 15:14:46.560: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 27 15:14:46.561: INFO: 
Logging pods the kubelet thinks are on node hunter-worker2 before test
Aug 27 15:14:46.571: INFO: kindnet-l4sc5 from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded)
Aug 27 15:14:46.571: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 27 15:14:46.571: INFO: rally-a0035e6c-x0kfgasz-79fb6568cc-vpxdp from c-rally-a0035e6c-720erhyc started at 2020-08-23 21:14:52 +0000 UTC (1 container statuses recorded)
Aug 27 15:14:46.571: INFO: 	Container rally-a0035e6c-x0kfgasz ready: true, restart count 89
Aug 27 15:14:46.571: INFO: kube-proxy-7x47x from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded)
Aug 27 15:14:46.571: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-worker
STEP: verifying the node has the label node hunter-worker2
Aug 27 15:14:48.084: INFO: Pod rally-a0035e6c-0q7zegi3-7f9d59c68-b7x9w requesting resource cpu=0m on Node hunter-worker
Aug 27 15:14:48.084: INFO: Pod rally-a0035e6c-x0kfgasz-79fb6568cc-vpxdp requesting resource cpu=0m on Node hunter-worker2
Aug 27 15:14:48.084: INFO: Pod kindnet-kvcmt requesting resource cpu=100m on Node hunter-worker
Aug 27 15:14:48.084: INFO: Pod kindnet-l4sc5 requesting resource cpu=100m on Node hunter-worker2
Aug 27 15:14:48.084: INFO: Pod kube-proxy-7x47x requesting resource cpu=0m on Node hunter-worker2
Aug 27 15:14:48.084: INFO: Pod kube-proxy-xm64c requesting resource cpu=0m on Node hunter-worker
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0bdb20f1-e878-11ea-894e-0242ac11000d.162f28d523b1f8e7], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-cjpkx/filler-pod-0bdb20f1-e878-11ea-894e-0242ac11000d to hunter-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0bdb20f1-e878-11ea-894e-0242ac11000d.162f28d5b5479799], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0bdb20f1-e878-11ea-894e-0242ac11000d.162f28d64f520f03], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0bdb20f1-e878-11ea-894e-0242ac11000d.162f28d66c98a994], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0bffd241-e878-11ea-894e-0242ac11000d.162f28d5244e53cc], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-cjpkx/filler-pod-0bffd241-e878-11ea-894e-0242ac11000d to hunter-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0bffd241-e878-11ea-894e-0242ac11000d.162f28d5c72831c5], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0bffd241-e878-11ea-894e-0242ac11000d.162f28d679406606], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0bffd241-e878-11ea-894e-0242ac11000d.162f28d6884f7cc7], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.162f28d703ccaac1], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node hunter-worker2
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node hunter-worker
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:14:58.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-cjpkx" for this suite.
Aug 27 15:15:06.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:15:06.734: INFO: namespace: e2e-tests-sched-pred-cjpkx, resource: bindings, ignored listing per whitelist
Aug 27 15:15:06.757: INFO: namespace e2e-tests-sched-pred-cjpkx deletion completed in 8.235873935s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:21.769 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
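The SchedulerPredicates spec above first fills each node with "filler" pods sized to consume most of the allocatable CPU, then submits one more pod whose request cannot fit anywhere; the Warning event it then expects is exactly the FailedScheduling message shown ("2 Insufficient cpu", plus the tainted control-plane node). The sketch below shows the shape of that over-requesting pod; the 600m figure is arbitrary, since the real test computes the exact remaining capacity per node.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Once the filler pods hold most of each node's allocatable CPU, any pod
	// requesting more than the remainder stays Pending with FailedScheduling.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "additional-pod",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{
						v1.ResourceCPU: resource.MustParse("600m"), // illustrative value
					},
				},
			}},
		},
	}
	fmt.Printf("%s requests %s CPU\n", pod.Name, pod.Spec.Containers[0].Resources.Requests.Cpu())
}
```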
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:15:06.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 27 15:15:06.919: INFO: Pod name rollover-pod: Found 0 pods out of 1
Aug 27 15:15:11.927: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 27 15:15:11.928: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Aug 27 15:15:13.935: INFO: Creating deployment "test-rollover-deployment"
Aug 27 15:15:13.983: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Aug 27 15:15:15.994: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Aug 27 15:15:16.006: INFO: Ensure that both replica sets have 1 created replica
Aug 27 15:15:16.017: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Aug 27 15:15:16.044: INFO: Updating deployment test-rollover-deployment
Aug 27 15:15:16.045: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Aug 27 15:15:18.069: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Aug 27 15:15:18.082: INFO: Make sure deployment "test-rollover-deployment" is complete
Aug 27 15:15:18.095: INFO: all replica sets need to contain the pod-template-hash label
Aug 27 15:15:18.095: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138114, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138114, loc:(*time.Location)(0x70e3ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138116, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138114, loc:(*time.Location)(0x70e3ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 15:15:20.109: INFO: all replica sets need to contain the pod-template-hash label
Aug 27 15:15:20.109: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138114, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138114, loc:(*time.Location)(0x70e3ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138116, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138114, loc:(*time.Location)(0x70e3ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 15:15:22.612: INFO: all replica sets need to contain the pod-template-hash label
Aug 27 15:15:22.612: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138114, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138114, loc:(*time.Location)(0x70e3ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138116, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138114, loc:(*time.Location)(0x70e3ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 15:15:24.110: INFO: all replica sets need to contain the pod-template-hash label
Aug 27 15:15:24.110: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138114, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138114, loc:(*time.Location)(0x70e3ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138122, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138114, loc:(*time.Location)(0x70e3ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 15:15:26.105: INFO: all replica sets need to contain the pod-template-hash label
Aug 27 15:15:26.106: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138114, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138114, loc:(*time.Location)(0x70e3ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138122, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138114, loc:(*time.Location)(0x70e3ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 15:15:28.108: INFO: all replica sets need to contain the pod-template-hash label
Aug 27 15:15:28.108: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138114, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138114, loc:(*time.Location)(0x70e3ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138122, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138114, loc:(*time.Location)(0x70e3ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 15:15:30.341: INFO: all replica sets need to contain the pod-template-hash label
Aug 27 15:15:30.341: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138114, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138114, loc:(*time.Location)(0x70e3ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138122, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138114, loc:(*time.Location)(0x70e3ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 15:15:32.857: INFO: all replica sets need to contain the pod-template-hash label
Aug 27 15:15:32.858: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138114, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138114, loc:(*time.Location)(0x70e3ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138122, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138114, loc:(*time.Location)(0x70e3ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 15:15:36.009: INFO: 
Aug 27 15:15:36.009: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138114, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138114, loc:(*time.Location)(0x70e3ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138132, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734138114, loc:(*time.Location)(0x70e3ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 15:15:37.236: INFO: 
Aug 27 15:15:37.237: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Aug 27 15:15:37.487: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-s2c2j,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-s2c2j/deployments/test-rollover-deployment,UID:1b440c1d-e878-11ea-a485-0242ac120004,ResourceVersion:2656663,Generation:2,CreationTimestamp:2020-08-27 15:15:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-27 15:15:14 +0000 UTC 2020-08-27 15:15:14 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-27 15:15:35 +0000 UTC 2020-08-27 15:15:14 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Aug 27 15:15:37.496: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-s2c2j,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-s2c2j/replicasets/test-rollover-deployment-5b8479fdb6,UID:1c858409-e878-11ea-a485-0242ac120004,ResourceVersion:2656651,Generation:2,CreationTimestamp:2020-08-27 15:15:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 1b440c1d-e878-11ea-a485-0242ac120004 0x4001951b37 0x4001951b38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Aug 27 15:15:37.496: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Aug 27 15:15:37.497: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-s2c2j,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-s2c2j/replicasets/test-rollover-controller,UID:171077a4-e878-11ea-a485-0242ac120004,ResourceVersion:2656662,Generation:2,CreationTimestamp:2020-08-27 15:15:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 1b440c1d-e878-11ea-a485-0242ac120004 0x40019513af 0x40019513c0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 27 15:15:37.498: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-s2c2j,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-s2c2j/replicasets/test-rollover-deployment-58494b7559,UID:1b4cac03-e878-11ea-a485-0242ac120004,ResourceVersion:2656612,Generation:2,CreationTimestamp:2020-08-27 15:15:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 1b440c1d-e878-11ea-a485-0242ac120004 0x40019517a7 0x40019517a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 27 15:15:37.504: INFO: Pod "test-rollover-deployment-5b8479fdb6-7xzrj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-7xzrj,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-s2c2j,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s2c2j/pods/test-rollover-deployment-5b8479fdb6-7xzrj,UID:1c9e26c6-e878-11ea-a485-0242ac120004,ResourceVersion:2656629,Generation:0,CreationTimestamp:2020-08-27 15:15:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 1c858409-e878-11ea-a485-0242ac120004 0x4001306d27 0x4001306d28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rt6qg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rt6qg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-rt6qg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4001306e00} {node.kubernetes.io/unreachable Exists  NoExecute 0x4001306e20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 15:15:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 15:15:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 15:15:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 15:15:16 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.235,StartTime:2020-08-27 15:15:16 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-27 15:15:20 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://50ff19e199ed97813285e8c6eb08715f95fa8b4481068f666decabd2d016578f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:15:37.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-s2c2j" for this suite.
Aug 27 15:15:47.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:15:47.668: INFO: namespace: e2e-tests-deployment-s2c2j, resource: bindings, ignored listing per whitelist
Aug 27 15:15:48.044: INFO: namespace e2e-tests-deployment-s2c2j deletion completed in 10.533966629s

• [SLOW TEST:41.286 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:15:48.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0827 15:16:29.809229       7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 27 15:16:29.809: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:16:29.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-z27tf" for this suite.
Aug 27 15:16:53.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:16:54.095: INFO: namespace: e2e-tests-gc-z27tf, resource: bindings, ignored listing per whitelist
Aug 27 15:16:54.098: INFO: namespace e2e-tests-gc-z27tf deletion completed in 24.281868569s

• [SLOW TEST:66.052 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
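Editor's note: the garbage-collector run above exercises orphaning — the RC is deleted with an orphaning delete policy, so its pods keep running and only lose their owner reference. A minimal client-go sketch of that kind of delete follows (the namespace, name, and clientset are hypothetical, and the signature assumes a recent client-go; this is not the e2e framework's own code):

package gcexample

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCOrphaningPods deletes a ReplicationController but leaves its pods
// behind by setting PropagationPolicy to Orphan, the behaviour the test above
// verifies the garbage collector respects for 30 seconds.
func deleteRCOrphaningPods(ctx context.Context, cs kubernetes.Interface, namespace, name string) error {
	orphan := metav1.DeletePropagationOrphan
	return cs.CoreV1().ReplicationControllers(namespace).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &orphan,
	})
}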
SSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:16:54.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Aug 27 15:16:54.240: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 27 15:16:54.270: INFO: Waiting for terminating namespaces to be deleted...
Aug 27 15:16:54.274: INFO: 
Logging pods the kubelet thinks are on node hunter-worker before test
Aug 27 15:16:54.308: INFO: kindnet-kvcmt from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded)
Aug 27 15:16:54.308: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 27 15:16:54.308: INFO: rally-a0035e6c-0q7zegi3-7f9d59c68-b7x9w from c-rally-a0035e6c-720erhyc started at 2020-08-23 21:15:14 +0000 UTC (1 container statuses recorded)
Aug 27 15:16:54.308: INFO: 	Container rally-a0035e6c-0q7zegi3 ready: true, restart count 89
Aug 27 15:16:54.308: INFO: kube-proxy-xm64c from kube-system started at 2020-08-15 09:32:58 +0000 UTC (1 container statuses recorded)
Aug 27 15:16:54.308: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 27 15:16:54.308: INFO: 
Logging pods the kubelet thinks are on node hunter-worker2 before test
Aug 27 15:16:54.323: INFO: kube-proxy-7x47x from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded)
Aug 27 15:16:54.323: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 27 15:16:54.323: INFO: kindnet-l4sc5 from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded)
Aug 27 15:16:54.323: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 27 15:16:54.323: INFO: rally-a0035e6c-x0kfgasz-79fb6568cc-vpxdp from c-rally-a0035e6c-720erhyc started at 2020-08-23 21:14:52 +0000 UTC (1 container statuses recorded)
Aug 27 15:16:54.323: INFO: 	Container rally-a0035e6c-x0kfgasz ready: true, restart count 89
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-5b44d2a1-e878-11ea-894e-0242ac11000d 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-5b44d2a1-e878-11ea-894e-0242ac11000d off the node hunter-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-5b44d2a1-e878-11ea-894e-0242ac11000d
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:17:12.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-bm522" for this suite.
Aug 27 15:17:29.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:17:29.387: INFO: namespace: e2e-tests-sched-pred-bm522, resource: bindings, ignored listing per whitelist
Aug 27 15:17:29.390: INFO: namespace e2e-tests-sched-pred-bm522 deletion completed in 16.608849283s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:35.292 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
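Editor's note: the scheduling predicate above is driven by an ordinary nodeSelector — the test labels a node and then creates a pod whose nodeSelector must match that label. A minimal pod-spec sketch of the same idea (the label key/value, pod name, and image are hypothetical):

package schedexample

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podPinnedByNodeSelector builds a pod the scheduler will only place on nodes
// carrying the given label, mirroring the NodeSelector predicate above.
func podPinnedByNodeSelector(labelKey, labelValue string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{labelKey: labelValue},
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
}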
SSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:17:29.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Aug 27 15:17:30.222: INFO: Waiting up to 5m0s for pod "client-containers-6c49db4d-e878-11ea-894e-0242ac11000d" in namespace "e2e-tests-containers-2dgw2" to be "success or failure"
Aug 27 15:17:30.237: INFO: Pod "client-containers-6c49db4d-e878-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.472498ms
Aug 27 15:17:32.245: INFO: Pod "client-containers-6c49db4d-e878-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022949576s
Aug 27 15:17:34.251: INFO: Pod "client-containers-6c49db4d-e878-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029545755s
Aug 27 15:17:36.381: INFO: Pod "client-containers-6c49db4d-e878-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.159540933s
STEP: Saw pod success
Aug 27 15:17:36.382: INFO: Pod "client-containers-6c49db4d-e878-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:17:36.386: INFO: Trying to get logs from node hunter-worker pod client-containers-6c49db4d-e878-11ea-894e-0242ac11000d container test-container: 
STEP: delete the pod
Aug 27 15:17:36.459: INFO: Waiting for pod client-containers-6c49db4d-e878-11ea-894e-0242ac11000d to disappear
Aug 27 15:17:36.734: INFO: Pod client-containers-6c49db4d-e878-11ea-894e-0242ac11000d no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:17:36.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-2dgw2" for this suite.
Aug 27 15:17:44.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:17:45.001: INFO: namespace: e2e-tests-containers-2dgw2, resource: bindings, ignored listing per whitelist
Aug 27 15:17:45.060: INFO: namespace e2e-tests-containers-2dgw2 deletion completed in 8.316087096s

• [SLOW TEST:15.668 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
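Editor's note: the Docker Containers test above relies on the fact that a container's command field replaces the image's ENTRYPOINT (and args, if set, would replace CMD). A sketch of a container spec doing that (image and command are hypothetical, not the test's actual fixture):

package containersexample

import corev1 "k8s.io/api/core/v1"

// overrideEntrypoint returns a container whose Command replaces the image's
// default ENTRYPOINT, the behaviour checked by the test above.
func overrideEntrypoint() corev1.Container {
	return corev1.Container{
		Name:    "test-container",
		Image:   "busybox",
		Command: []string{"/bin/sh", "-c", "echo overridden entrypoint"},
	}
}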
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:17:45.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Aug 27 15:17:45.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-s9wnk'
Aug 27 15:17:47.565: INFO: stderr: ""
Aug 27 15:17:47.565: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Aug 27 15:17:48.572: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 15:17:48.572: INFO: Found 0 / 1
Aug 27 15:17:49.724: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 15:17:49.724: INFO: Found 0 / 1
Aug 27 15:17:51.071: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 15:17:51.071: INFO: Found 0 / 1
Aug 27 15:17:51.572: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 15:17:51.572: INFO: Found 0 / 1
Aug 27 15:17:52.796: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 15:17:52.796: INFO: Found 0 / 1
Aug 27 15:17:53.572: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 15:17:53.572: INFO: Found 0 / 1
Aug 27 15:17:55.250: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 15:17:55.251: INFO: Found 0 / 1
Aug 27 15:17:56.016: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 15:17:56.016: INFO: Found 1 / 1
Aug 27 15:17:56.017: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 27 15:17:56.022: INFO: Selector matched 1 pods for map[app:redis]
Aug 27 15:17:56.022: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Aug 27 15:17:56.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-rzd8c redis-master --namespace=e2e-tests-kubectl-s9wnk'
Aug 27 15:17:57.603: INFO: stderr: ""
Aug 27 15:17:57.603: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 27 Aug 15:17:54.653 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 27 Aug 15:17:54.653 # Server started, Redis version 3.2.12\n1:M 27 Aug 15:17:54.653 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 27 Aug 15:17:54.653 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Aug 27 15:17:57.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-rzd8c redis-master --namespace=e2e-tests-kubectl-s9wnk --tail=1'
Aug 27 15:17:58.969: INFO: stderr: ""
Aug 27 15:17:58.969: INFO: stdout: "1:M 27 Aug 15:17:54.653 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Aug 27 15:17:58.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-rzd8c redis-master --namespace=e2e-tests-kubectl-s9wnk --limit-bytes=1'
Aug 27 15:18:00.309: INFO: stderr: ""
Aug 27 15:18:00.309: INFO: stdout: " "
STEP: exposing timestamps
Aug 27 15:18:00.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-rzd8c redis-master --namespace=e2e-tests-kubectl-s9wnk --tail=1 --timestamps'
Aug 27 15:18:01.621: INFO: stderr: ""
Aug 27 15:18:01.621: INFO: stdout: "2020-08-27T15:17:54.653893735Z 1:M 27 Aug 15:17:54.653 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Aug 27 15:18:04.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-rzd8c redis-master --namespace=e2e-tests-kubectl-s9wnk --since=1s'
Aug 27 15:18:05.424: INFO: stderr: ""
Aug 27 15:18:05.424: INFO: stdout: ""
Aug 27 15:18:05.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-rzd8c redis-master --namespace=e2e-tests-kubectl-s9wnk --since=24h'
Aug 27 15:18:06.940: INFO: stderr: ""
Aug 27 15:18:06.941: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 27 Aug 15:17:54.653 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 27 Aug 15:17:54.653 # Server started, Redis version 3.2.12\n1:M 27 Aug 15:17:54.653 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 27 Aug 15:17:54.653 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Aug 27 15:18:06.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-s9wnk'
Aug 27 15:18:08.284: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 15:18:08.284: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Aug 27 15:18:08.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-s9wnk'
Aug 27 15:18:09.634: INFO: stderr: "No resources found.\n"
Aug 27 15:18:09.634: INFO: stdout: ""
Aug 27 15:18:09.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-s9wnk -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 27 15:18:10.960: INFO: stderr: ""
Aug 27 15:18:10.960: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:18:10.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-s9wnk" for this suite.
Aug 27 15:18:34.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:18:35.039: INFO: namespace: e2e-tests-kubectl-s9wnk, resource: bindings, ignored listing per whitelist
Aug 27 15:18:35.139: INFO: namespace e2e-tests-kubectl-s9wnk deletion completed in 24.171228947s

• [SLOW TEST:50.076 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
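Editor's note: the kubectl invocations above (--tail, --limit-bytes, --timestamps, --since) correspond to fields of PodLogOptions on the pod logs subresource. A client-go sketch of the same filtering follows (pod/container names and the clientset are hypothetical; the Stream signature assumes a recent client-go):

package kubectllogsexample

import (
	"context"
	"io"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// tailOneLineWithTimestamps streams the last log line of a container with
// timestamps, roughly what `kubectl logs --tail=1 --timestamps` does above.
func tailOneLineWithTimestamps(ctx context.Context, cs kubernetes.Interface, ns, pod, container string) (io.ReadCloser, error) {
	tail := int64(1)
	opts := &corev1.PodLogOptions{
		Container:  container,
		TailLines:  &tail, // --tail=1
		Timestamps: true,  // --timestamps
		// LimitBytes and SinceSeconds cover --limit-bytes and --since.
	}
	return cs.CoreV1().Pods(ns).GetLogs(pod, opts).Stream(ctx)
}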
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:18:35.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
Aug 27 15:18:50.426: INFO: 5 pods remaining
Aug 27 15:18:50.426: INFO: 5 pods have nil DeletionTimestamp
Aug 27 15:18:50.426: INFO: 
STEP: Gathering metrics
W0827 15:18:55.002958       7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 27 15:18:55.003: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:18:55.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-vk2sz" for this suite.
Aug 27 15:19:23.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:19:23.202: INFO: namespace: e2e-tests-gc-vk2sz, resource: bindings, ignored listing per whitelist
Aug 27 15:19:23.309: INFO: namespace e2e-tests-gc-vk2sz deletion completed in 28.296105617s

• [SLOW TEST:48.168 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
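Editor's note: the "valid owner plus owner waiting for dependents" case above hinges on pods carrying two ownerReferences; the garbage collector may only delete a dependent once every owner permits it. A sketch of attaching a second, non-controller owner to a pod (names and UIDs are hypothetical):

package gcownersexample

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// addSecondOwner appends an extra ReplicationController owner to a pod — the
// shape of ownership the GC test above sets up for half of the pods.
func addSecondOwner(pod *corev1.Pod, rcName string, rcUID types.UID) {
	pod.OwnerReferences = append(pod.OwnerReferences, metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "ReplicationController",
		Name:       rcName,
		UID:        rcUID,
		// Controller is left unset: only one owner may be the controller.
	})
}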
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:19:23.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 27 15:19:23.753: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Aug 27 15:19:23.770: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-4qrr4/daemonsets","resourceVersion":"2657570"},"items":null}

Aug 27 15:19:23.772: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-4qrr4/pods","resourceVersion":"2657570"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:19:23.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-4qrr4" for this suite.
Aug 27 15:19:36.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:19:36.669: INFO: namespace: e2e-tests-daemonsets-4qrr4, resource: bindings, ignored listing per whitelist
Aug 27 15:19:36.733: INFO: namespace e2e-tests-daemonsets-4qrr4 deletion completed in 12.933140359s

S [SKIPPING] [13.424 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Aug 27 15:19:23.753: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:19:36.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 27 15:19:39.194: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b8cdfc4a-e878-11ea-894e-0242ac11000d" in namespace "e2e-tests-downward-api-6t8sr" to be "success or failure"
Aug 27 15:19:39.521: INFO: Pod "downwardapi-volume-b8cdfc4a-e878-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 326.985863ms
Aug 27 15:19:41.613: INFO: Pod "downwardapi-volume-b8cdfc4a-e878-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.418520397s
Aug 27 15:19:44.312: INFO: Pod "downwardapi-volume-b8cdfc4a-e878-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.117985305s
Aug 27 15:19:46.319: INFO: Pod "downwardapi-volume-b8cdfc4a-e878-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.124894464s
Aug 27 15:19:48.467: INFO: Pod "downwardapi-volume-b8cdfc4a-e878-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.27274667s
Aug 27 15:19:50.472: INFO: Pod "downwardapi-volume-b8cdfc4a-e878-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.277861596s
STEP: Saw pod success
Aug 27 15:19:50.472: INFO: Pod "downwardapi-volume-b8cdfc4a-e878-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:19:50.805: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-b8cdfc4a-e878-11ea-894e-0242ac11000d container client-container: 
STEP: delete the pod
Aug 27 15:19:51.300: INFO: Waiting for pod downwardapi-volume-b8cdfc4a-e878-11ea-894e-0242ac11000d to disappear
Aug 27 15:19:51.543: INFO: Pod downwardapi-volume-b8cdfc4a-e878-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:19:51.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6t8sr" for this suite.
Aug 27 15:19:59.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:19:59.766: INFO: namespace: e2e-tests-downward-api-6t8sr, resource: bindings, ignored listing per whitelist
Aug 27 15:20:00.101: INFO: namespace e2e-tests-downward-api-6t8sr deletion completed in 8.546996749s

• [SLOW TEST:23.361 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
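Editor's note: the downward API volume above exposes limits.memory; when the container declares no memory limit, the node's allocatable memory is substituted as the default, which is what the test asserts. A sketch of the volume item that requests that value (file name, volume name, and container name are hypothetical):

package downwardvolexample

import corev1 "k8s.io/api/core/v1"

// memoryLimitVolume projects the container's effective memory limit into a
// file; with no limit set on the container, the node allocatable value is used.
func memoryLimitVolume(containerName string) corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "memory_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: containerName,
						Resource:      "limits.memory",
					},
				}},
			},
		},
	}
}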
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:20:00.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-4wfq4
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-4wfq4
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-4wfq4
STEP: Waiting until pod test-pod starts running in namespace e2e-tests-statefulset-4wfq4
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace e2e-tests-statefulset-4wfq4
Aug 27 15:20:09.699: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-4wfq4, name: ss-0, uid: cad9df9e-e878-11ea-a485-0242ac120004, status phase: Pending. Waiting for statefulset controller to delete.
Aug 27 15:20:18.082: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-4wfq4, name: ss-0, uid: cad9df9e-e878-11ea-a485-0242ac120004, status phase: Failed. Waiting for statefulset controller to delete.
Aug 27 15:20:18.118: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-4wfq4, name: ss-0, uid: cad9df9e-e878-11ea-a485-0242ac120004, status phase: Failed. Waiting for statefulset controller to delete.
Aug 27 15:20:18.169: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-4wfq4
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-4wfq4
STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-4wfq4 and reaches the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Aug 27 15:20:28.438: INFO: Deleting all statefulset in ns e2e-tests-statefulset-4wfq4
Aug 27 15:20:28.451: INFO: Scaling statefulset ss to 0
Aug 27 15:20:48.518: INFO: Waiting for statefulset status.replicas updated to 0
Aug 27 15:20:48.522: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:20:48.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-4wfq4" for this suite.
Aug 27 15:20:59.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:20:59.473: INFO: namespace: e2e-tests-statefulset-4wfq4, resource: bindings, ignored listing per whitelist
Aug 27 15:20:59.480: INFO: namespace e2e-tests-statefulset-4wfq4 deletion completed in 10.473691165s

• [SLOW TEST:59.379 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:20:59.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Aug 27 15:21:00.139: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:21:08.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-cmmfx" for this suite.
Aug 27 15:21:16.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:21:16.831: INFO: namespace: e2e-tests-init-container-cmmfx, resource: bindings, ignored listing per whitelist
Aug 27 15:21:16.918: INFO: namespace e2e-tests-init-container-cmmfx deletion completed in 8.488650508s

• [SLOW TEST:17.437 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
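Editor's note: the init-container case above depends on pod-level semantics — with restartPolicy Never, a failing init container fails the whole pod and the app containers never start. A sketch of a pod wired that way (pod name, images, and commands are hypothetical):

package initcontainerexample

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// failingInitPod never reaches its app container: the init container exits
// non-zero, and RestartPolicyNever sends the pod straight to the Failed phase.
func failingInitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init1",
				Image:   "busybox",
				Command: []string{"/bin/false"},
			}},
			Containers: []corev1.Container{{
				Name:    "run1",
				Image:   "busybox",
				Command: []string{"/bin/true"},
			}},
		},
	}
}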
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:21:16.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Aug 27 15:21:17.154: INFO: Waiting up to 5m0s for pod "pod-f3c137cf-e878-11ea-894e-0242ac11000d" in namespace "e2e-tests-emptydir-mkfzg" to be "success or failure"
Aug 27 15:21:17.159: INFO: Pod "pod-f3c137cf-e878-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.635957ms
Aug 27 15:21:19.165: INFO: Pod "pod-f3c137cf-e878-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010299186s
Aug 27 15:21:21.170: INFO: Pod "pod-f3c137cf-e878-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 4.015452696s
Aug 27 15:21:23.203: INFO: Pod "pod-f3c137cf-e878-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.048682053s
STEP: Saw pod success
Aug 27 15:21:23.203: INFO: Pod "pod-f3c137cf-e878-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:21:23.207: INFO: Trying to get logs from node hunter-worker2 pod pod-f3c137cf-e878-11ea-894e-0242ac11000d container test-container: 
STEP: delete the pod
Aug 27 15:21:23.286: INFO: Waiting for pod pod-f3c137cf-e878-11ea-894e-0242ac11000d to disappear
Aug 27 15:21:23.472: INFO: Pod pod-f3c137cf-e878-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:21:23.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-mkfzg" for this suite.
Aug 27 15:21:30.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:21:30.095: INFO: namespace: e2e-tests-emptydir-mkfzg, resource: bindings, ignored listing per whitelist
Aug 27 15:21:30.103: INFO: namespace e2e-tests-emptydir-mkfzg deletion completed in 6.14622295s

• [SLOW TEST:13.184 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
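Editor's note: the emptyDir check above comes down to the volume source and its mount — leaving Medium empty selects the node's default (disk-backed) medium, and the test then inspects the mode bits at the mount point. A sketch of that volume and mount (names and path are hypothetical):

package emptydirexample

import corev1 "k8s.io/api/core/v1"

// defaultMediumEmptyDir returns an emptyDir on the default medium plus the
// mount a container would use; StorageMediumMemory would select tmpfs instead.
func defaultMediumEmptyDir() (corev1.Volume, corev1.VolumeMount) {
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
		},
	}
	mount := corev1.VolumeMount{Name: "test-volume", MountPath: "/test-volume"}
	return vol, mount
}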
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:21:30.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:21:41.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-8cfkq" for this suite.
Aug 27 15:22:29.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:22:29.835: INFO: namespace: e2e-tests-kubelet-test-8cfkq, resource: bindings, ignored listing per whitelist
Aug 27 15:22:31.642: INFO: namespace e2e-tests-kubelet-test-8cfkq deletion completed in 50.140613284s

• [SLOW TEST:61.539 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
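Editor's note: the kubelet check above is a per-container security setting — with readOnlyRootFilesystem set, the container's root filesystem is mounted read-only, so writes outside mounted volumes fail. A sketch of that securityContext (container name, image, and command are hypothetical):

package kubeletexample

import corev1 "k8s.io/api/core/v1"

// readOnlyRootContainer builds a busybox container whose root filesystem is
// read-only; any attempt to write to / fails, as the test above expects.
func readOnlyRootContainer() corev1.Container {
	readOnly := true
	return corev1.Container{
		Name:    "busybox-readonly",
		Image:   "busybox",
		Command: []string{"sh", "-c", "sleep 600"},
		SecurityContext: &corev1.SecurityContext{
			ReadOnlyRootFilesystem: &readOnly,
		},
	}
}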
SSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:22:31.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Aug 27 15:22:32.522: INFO: Waiting up to 5m0s for pod "downward-api-209522ca-e879-11ea-894e-0242ac11000d" in namespace "e2e-tests-downward-api-k8m6p" to be "success or failure"
Aug 27 15:22:32.828: INFO: Pod "downward-api-209522ca-e879-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 305.99006ms
Aug 27 15:22:35.079: INFO: Pod "downward-api-209522ca-e879-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.557341695s
Aug 27 15:22:37.109: INFO: Pod "downward-api-209522ca-e879-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.587121198s
Aug 27 15:22:39.573: INFO: Pod "downward-api-209522ca-e879-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.05098s
STEP: Saw pod success
Aug 27 15:22:39.573: INFO: Pod "downward-api-209522ca-e879-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:22:39.577: INFO: Trying to get logs from node hunter-worker2 pod downward-api-209522ca-e879-11ea-894e-0242ac11000d container dapi-container: 
STEP: delete the pod
Aug 27 15:22:39.969: INFO: Waiting for pod downward-api-209522ca-e879-11ea-894e-0242ac11000d to disappear
Aug 27 15:22:40.282: INFO: Pod downward-api-209522ca-e879-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:22:40.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-k8m6p" for this suite.
Aug 27 15:22:46.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:22:47.405: INFO: namespace: e2e-tests-downward-api-k8m6p, resource: bindings, ignored listing per whitelist
Aug 27 15:22:47.461: INFO: namespace e2e-tests-downward-api-k8m6p deletion completed in 6.807660693s

• [SLOW TEST:15.817 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
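Editor's note: the downward API env-var test above maps metadata.uid into the container environment through a fieldRef. A sketch of that env entry (the variable name is hypothetical):

package downwardenvexample

import corev1 "k8s.io/api/core/v1"

// podUIDEnvVar exposes the pod's own UID to the container as POD_UID,
// the downward-API mechanism exercised by the test above.
func podUIDEnvVar() corev1.EnvVar {
	return corev1.EnvVar{
		Name: "POD_UID",
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
		},
	}
}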
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:22:47.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Aug 27 15:22:48.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Aug 27 15:22:49.594: INFO: stderr: ""
Aug 27 15:22:49.594: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:22:49.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-l2wng" for this suite.
Aug 27 15:22:55.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:22:55.755: INFO: namespace: e2e-tests-kubectl-l2wng, resource: bindings, ignored listing per whitelist
Aug 27 15:22:55.829: INFO: namespace e2e-tests-kubectl-l2wng deletion completed in 6.227174284s

• [SLOW TEST:8.364 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:22:55.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Aug 27 15:23:05.064: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:23:05.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-shz4k" for this suite.
Aug 27 15:23:31.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:23:31.609: INFO: namespace: e2e-tests-replicaset-shz4k, resource: bindings, ignored listing per whitelist
Aug 27 15:23:31.729: INFO: namespace e2e-tests-replicaset-shz4k deletion completed in 26.412967738s

• [SLOW TEST:35.898 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:23:31.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 27 15:23:31.942: INFO: Waiting up to 5m0s for pod "pod-4415f0b0-e879-11ea-894e-0242ac11000d" in namespace "e2e-tests-emptydir-l6pcj" to be "success or failure"
Aug 27 15:23:31.950: INFO: Pod "pod-4415f0b0-e879-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.295109ms
Aug 27 15:23:33.973: INFO: Pod "pod-4415f0b0-e879-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030504361s
Aug 27 15:23:35.980: INFO: Pod "pod-4415f0b0-e879-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 4.037002085s
Aug 27 15:23:37.986: INFO: Pod "pod-4415f0b0-e879-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.0431875s
STEP: Saw pod success
Aug 27 15:23:37.986: INFO: Pod "pod-4415f0b0-e879-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:23:38.132: INFO: Trying to get logs from node hunter-worker pod pod-4415f0b0-e879-11ea-894e-0242ac11000d container test-container: 
STEP: delete the pod
Aug 27 15:23:38.162: INFO: Waiting for pod pod-4415f0b0-e879-11ea-894e-0242ac11000d to disappear
Aug 27 15:23:38.173: INFO: Pod pod-4415f0b0-e879-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:23:38.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-l6pcj" for this suite.
Aug 27 15:23:48.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:23:48.381: INFO: namespace: e2e-tests-emptydir-l6pcj, resource: bindings, ignored listing per whitelist
Aug 27 15:23:48.452: INFO: namespace e2e-tests-emptydir-l6pcj deletion completed in 10.27125231s

• [SLOW TEST:16.723 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
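
The pod manifest behind the "emptydir 0644 on node default medium" step is not printed in the log; a rough stand-in that creates a file in an emptyDir, sets mode 0644 and prints the resulting permissions looks like this (pod name, image and commands are assumptions, the container name follows the log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo        # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                  # "default medium" = node-local disk; medium: Memory would use tmpfs
EOF
kubectl logs emptydir-0644-demo -c test-container
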
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:23:48.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-wz85
STEP: Creating a pod to test atomic-volume-subpath
Aug 27 15:23:48.972: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-wz85" in namespace "e2e-tests-subpath-q5h9h" to be "success or failure"
Aug 27 15:23:49.046: INFO: Pod "pod-subpath-test-secret-wz85": Phase="Pending", Reason="", readiness=false. Elapsed: 73.82531ms
Aug 27 15:23:51.584: INFO: Pod "pod-subpath-test-secret-wz85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.61183033s
Aug 27 15:23:53.589: INFO: Pod "pod-subpath-test-secret-wz85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.617137859s
Aug 27 15:23:55.804: INFO: Pod "pod-subpath-test-secret-wz85": Phase="Pending", Reason="", readiness=false. Elapsed: 6.831787633s
Aug 27 15:23:57.810: INFO: Pod "pod-subpath-test-secret-wz85": Phase="Pending", Reason="", readiness=false. Elapsed: 8.838150392s
Aug 27 15:23:59.817: INFO: Pod "pod-subpath-test-secret-wz85": Phase="Running", Reason="", readiness=false. Elapsed: 10.844861162s
Aug 27 15:24:01.824: INFO: Pod "pod-subpath-test-secret-wz85": Phase="Running", Reason="", readiness=false. Elapsed: 12.852252379s
Aug 27 15:24:03.829: INFO: Pod "pod-subpath-test-secret-wz85": Phase="Running", Reason="", readiness=false. Elapsed: 14.85687361s
Aug 27 15:24:05.835: INFO: Pod "pod-subpath-test-secret-wz85": Phase="Running", Reason="", readiness=false. Elapsed: 16.863307425s
Aug 27 15:24:07.840: INFO: Pod "pod-subpath-test-secret-wz85": Phase="Running", Reason="", readiness=false. Elapsed: 18.868145346s
Aug 27 15:24:09.938: INFO: Pod "pod-subpath-test-secret-wz85": Phase="Running", Reason="", readiness=false. Elapsed: 20.965815979s
Aug 27 15:24:11.946: INFO: Pod "pod-subpath-test-secret-wz85": Phase="Running", Reason="", readiness=false. Elapsed: 22.973739261s
Aug 27 15:24:13.953: INFO: Pod "pod-subpath-test-secret-wz85": Phase="Running", Reason="", readiness=false. Elapsed: 24.980730834s
Aug 27 15:24:16.306: INFO: Pod "pod-subpath-test-secret-wz85": Phase="Running", Reason="", readiness=false. Elapsed: 27.334031679s
Aug 27 15:24:18.613: INFO: Pod "pod-subpath-test-secret-wz85": Phase="Running", Reason="", readiness=false. Elapsed: 29.641157374s
Aug 27 15:24:20.805: INFO: Pod "pod-subpath-test-secret-wz85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.832692205s
STEP: Saw pod success
Aug 27 15:24:20.805: INFO: Pod "pod-subpath-test-secret-wz85" satisfied condition "success or failure"
Aug 27 15:24:20.809: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-wz85 container test-container-subpath-secret-wz85: 
STEP: delete the pod
Aug 27 15:24:21.697: INFO: Waiting for pod pod-subpath-test-secret-wz85 to disappear
Aug 27 15:24:22.104: INFO: Pod pod-subpath-test-secret-wz85 no longer exists
STEP: Deleting pod pod-subpath-test-secret-wz85
Aug 27 15:24:22.104: INFO: Deleting pod "pod-subpath-test-secret-wz85" in namespace "e2e-tests-subpath-q5h9h"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:24:22.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-q5h9h" for this suite.
Aug 27 15:24:35.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:24:36.614: INFO: namespace: e2e-tests-subpath-q5h9h, resource: bindings, ignored listing per whitelist
Aug 27 15:24:36.630: INFO: namespace e2e-tests-subpath-q5h9h deletion completed in 13.74515933s

• [SLOW TEST:48.177 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
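
The atomic-writer subpath spec mounts a single key of a Secret at a subPath inside the container. A hedged, hand-runnable equivalent (secret name, key and mount paths are assumptions; the suite's own pod additionally keeps re-reading the file to verify it survives atomic updates):

kubectl create secret generic subpath-demo-secret --from-literal=secret-key=secret-value
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-secret-demo   # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /mnt/secret-key"]
    volumeMounts:
    - name: secret-vol
      mountPath: /mnt/secret-key
      subPath: secret-key          # mount just this projected file, not the whole volume
  volumes:
  - name: secret-vol
    secret:
      secretName: subpath-demo-secret
EOF
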
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:24:36.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Aug 27 15:24:36.987: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:24:38.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2rtgb" for this suite.
Aug 27 15:24:46.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:24:47.432: INFO: namespace: e2e-tests-kubectl-2rtgb, resource: bindings, ignored listing per whitelist
Aug 27 15:24:47.468: INFO: namespace e2e-tests-kubectl-2rtgb deletion completed in 9.211611481s

• [SLOW TEST:10.837 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
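
The proxy spec only checks that kubectl can bind an ephemeral port when asked for port 0 and that /api/ is reachable through it. The same thing by hand:

# kubectl picks a free port and prints e.g. "Starting to serve on 127.0.0.1:<port>"
kubectl proxy --port=0 --disable-filter=true > /tmp/proxy.log 2>&1 &
sleep 1
PORT=$(sed -n 's/.*127\.0\.0\.1:\([0-9]*\).*/\1/p' /tmp/proxy.log)
# query the API root through the proxy, as the "curling proxy /api/ output" step does:
curl "http://127.0.0.1:${PORT}/api/"
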
SSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:24:47.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Aug 27 15:24:48.486: INFO: Waiting up to 5m0s for pod "downward-api-71afb6cb-e879-11ea-894e-0242ac11000d" in namespace "e2e-tests-downward-api-j2qz2" to be "success or failure"
Aug 27 15:24:48.769: INFO: Pod "downward-api-71afb6cb-e879-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 283.427854ms
Aug 27 15:24:50.864: INFO: Pod "downward-api-71afb6cb-e879-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.37852353s
Aug 27 15:24:53.027: INFO: Pod "downward-api-71afb6cb-e879-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.541246523s
Aug 27 15:24:55.034: INFO: Pod "downward-api-71afb6cb-e879-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.547785249s
STEP: Saw pod success
Aug 27 15:24:55.034: INFO: Pod "downward-api-71afb6cb-e879-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:24:55.231: INFO: Trying to get logs from node hunter-worker pod downward-api-71afb6cb-e879-11ea-894e-0242ac11000d container dapi-container: 
STEP: delete the pod
Aug 27 15:24:55.368: INFO: Waiting for pod downward-api-71afb6cb-e879-11ea-894e-0242ac11000d to disappear
Aug 27 15:24:55.429: INFO: Pod downward-api-71afb6cb-e879-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:24:55.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-j2qz2" for this suite.
Aug 27 15:25:01.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:25:01.753: INFO: namespace: e2e-tests-downward-api-j2qz2, resource: bindings, ignored listing per whitelist
Aug 27 15:25:01.796: INFO: namespace e2e-tests-downward-api-j2qz2 deletion completed in 6.358498836s

• [SLOW TEST:14.327 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
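
The env vars in this spec come from resourceFieldRef entries in the downward API; a sketch of a pod that exposes its own requests and limits to the container (resource values, pod name and image are assumptions, the container name follows the log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-demo     # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E '^(CPU|MEMORY)_'"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: requests.memory
EOF
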
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:25:01.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-79e9d7c5-e879-11ea-894e-0242ac11000d
STEP: Creating a pod to test consume secrets
Aug 27 15:25:02.477: INFO: Waiting up to 5m0s for pod "pod-secrets-79ea9275-e879-11ea-894e-0242ac11000d" in namespace "e2e-tests-secrets-fnk46" to be "success or failure"
Aug 27 15:25:02.627: INFO: Pod "pod-secrets-79ea9275-e879-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 150.595226ms
Aug 27 15:25:04.633: INFO: Pod "pod-secrets-79ea9275-e879-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156597198s
Aug 27 15:25:06.641: INFO: Pod "pod-secrets-79ea9275-e879-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16408983s
Aug 27 15:25:08.647: INFO: Pod "pod-secrets-79ea9275-e879-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.170285951s
STEP: Saw pod success
Aug 27 15:25:08.647: INFO: Pod "pod-secrets-79ea9275-e879-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:25:08.652: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-79ea9275-e879-11ea-894e-0242ac11000d container secret-volume-test: 
STEP: delete the pod
Aug 27 15:25:08.701: INFO: Waiting for pod pod-secrets-79ea9275-e879-11ea-894e-0242ac11000d to disappear
Aug 27 15:25:08.840: INFO: Pod pod-secrets-79ea9275-e879-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:25:08.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-fnk46" for this suite.
Aug 27 15:25:15.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:25:15.402: INFO: namespace: e2e-tests-secrets-fnk46, resource: bindings, ignored listing per whitelist
Aug 27 15:25:15.467: INFO: namespace e2e-tests-secrets-fnk46 deletion completed in 6.616915171s

• [SLOW TEST:13.670 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
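
"With mappings" refers to the items: list on the secret volume, which remaps a key to a different relative path. A hedged reproduction (secret data and paths are assumptions, the container name follows the log):

kubectl create secret generic secret-test-map-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-mappings-demo   # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-demo
      items:
      - key: data-1
        path: new-path-data-1        # the mapping: key data-1 appears under this path
EOF
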
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:25:15.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Aug 27 15:25:16.608: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:25:29.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-6vthj" for this suite.
Aug 27 15:25:55.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:25:55.640: INFO: namespace: e2e-tests-init-container-6vthj, resource: bindings, ignored listing per whitelist
Aug 27 15:25:55.660: INFO: namespace e2e-tests-init-container-6vthj deletion completed in 26.13976175s

• [SLOW TEST:40.191 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
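
The only detail the log prints here is "PodSpec: initContainers in spec.initContainers". What the spec exercises is a restartPolicy: Always pod whose init containers must all run to completion, in order, before the app container starts. A hedged equivalent:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-containers-demo        # assumed name
spec:
  restartPolicy: Always
  initContainers:
  - name: init-1
    image: busybox
    command: ["sh", "-c", "echo first init container done"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "echo second init container done"]
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo app running; sleep 3600"]
EOF
# both init containers should report Completed before the app container starts:
kubectl get pod init-containers-demo -o jsonpath='{.status.initContainerStatuses[*].state.terminated.reason}'
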
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:25:55.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-99ee023a-e879-11ea-894e-0242ac11000d
STEP: Creating a pod to test consume configMaps
Aug 27 15:25:56.228: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-99f84a9d-e879-11ea-894e-0242ac11000d" in namespace "e2e-tests-projected-9snqm" to be "success or failure"
Aug 27 15:25:56.263: INFO: Pod "pod-projected-configmaps-99f84a9d-e879-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 35.502131ms
Aug 27 15:25:58.271: INFO: Pod "pod-projected-configmaps-99f84a9d-e879-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04277922s
Aug 27 15:26:00.277: INFO: Pod "pod-projected-configmaps-99f84a9d-e879-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049256555s
Aug 27 15:26:02.339: INFO: Pod "pod-projected-configmaps-99f84a9d-e879-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.111363442s
Aug 27 15:26:04.344: INFO: Pod "pod-projected-configmaps-99f84a9d-e879-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116508255s
Aug 27 15:26:06.452: INFO: Pod "pod-projected-configmaps-99f84a9d-e879-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.223773153s
STEP: Saw pod success
Aug 27 15:26:06.452: INFO: Pod "pod-projected-configmaps-99f84a9d-e879-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:26:06.469: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-99f84a9d-e879-11ea-894e-0242ac11000d container projected-configmap-volume-test: 
STEP: delete the pod
Aug 27 15:26:06.532: INFO: Waiting for pod pod-projected-configmaps-99f84a9d-e879-11ea-894e-0242ac11000d to disappear
Aug 27 15:26:06.535: INFO: Pod pod-projected-configmaps-99f84a9d-e879-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:26:06.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9snqm" for this suite.
Aug 27 15:26:16.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:26:16.689: INFO: namespace: e2e-tests-projected-9snqm, resource: bindings, ignored listing per whitelist
Aug 27 15:26:16.733: INFO: namespace e2e-tests-projected-9snqm deletion completed in 10.175198528s

• [SLOW TEST:21.071 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
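
"As non-root" means the pod reads the projected configMap volume while running under a non-zero UID. A sketch (UID, configMap data, pod name and image are assumptions, the container name follows the log):

kubectl create configmap projected-configmap-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo   # assumed name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                      # running as a non-root UID is the point of this spec
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-demo
EOF
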
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:26:16.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-q8wz8
Aug 27 15:26:20.983: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-q8wz8
STEP: checking the pod's current state and verifying that restartCount is present
Aug 27 15:26:20.988: INFO: Initial restart count of pod liveness-http is 0
Aug 27 15:26:50.059: INFO: Restart count of pod e2e-tests-container-probe-q8wz8/liveness-http is now 1 (29.071322888s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:26:50.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-q8wz8" for this suite.
Aug 27 15:26:59.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:26:59.230: INFO: namespace: e2e-tests-container-probe-q8wz8, resource: bindings, ignored listing per whitelist
Aug 27 15:26:59.301: INFO: namespace e2e-tests-container-probe-q8wz8 deletion completed in 8.544598976s

• [SLOW TEST:42.567 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
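
The liveness-http pod is restarted once its /healthz endpoint starts failing, which is why restartCount goes from 0 to 1 about 29 seconds in. The probe stanza involved looks roughly like this (image, port and timings are assumptions borrowed from the standard liveness example, not taken from the log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness          # assumed image: a server that fails /healthz after a while
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
      failureThreshold: 3
EOF
# watch the restart count climb once the probe starts failing:
kubectl get pod liveness-http -o jsonpath='{.status.containerStatuses[0].restartCount}'
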
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:26:59.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 27 15:26:59.792: INFO: Waiting up to 5m0s for pod "pod-bff8238e-e879-11ea-894e-0242ac11000d" in namespace "e2e-tests-emptydir-tprpb" to be "success or failure"
Aug 27 15:26:59.956: INFO: Pod "pod-bff8238e-e879-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 163.633315ms
Aug 27 15:27:01.966: INFO: Pod "pod-bff8238e-e879-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174174751s
Aug 27 15:27:03.972: INFO: Pod "pod-bff8238e-e879-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.179662008s
Aug 27 15:27:05.978: INFO: Pod "pod-bff8238e-e879-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.186342907s
STEP: Saw pod success
Aug 27 15:27:05.979: INFO: Pod "pod-bff8238e-e879-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:27:05.984: INFO: Trying to get logs from node hunter-worker2 pod pod-bff8238e-e879-11ea-894e-0242ac11000d container test-container: 
STEP: delete the pod
Aug 27 15:27:06.154: INFO: Waiting for pod pod-bff8238e-e879-11ea-894e-0242ac11000d to disappear
Aug 27 15:27:06.167: INFO: Pod pod-bff8238e-e879-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:27:06.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-tprpb" for this suite.
Aug 27 15:27:12.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:27:12.334: INFO: namespace: e2e-tests-emptydir-tprpb, resource: bindings, ignored listing per whitelist
Aug 27 15:27:12.367: INFO: namespace e2e-tests-emptydir-tprpb deletion completed in 6.19189348s

• [SLOW TEST:13.065 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
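
This spec is the 0666 sibling of the (root,0644,default) case earlier in the log; relative to that sketch only the requested mode changes:

# same pod shape as the 0644 sketch above, with the mode swapped:
command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
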
SSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:27:12.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Aug 27 15:27:28.720: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-4x6ff PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 15:27:28.720: INFO: >>> kubeConfig: /root/.kube/config
I0827 15:27:28.776285       7 log.go:172] (0x4001cc8370) (0x400112d400) Create stream
I0827 15:27:28.776441       7 log.go:172] (0x4001cc8370) (0x400112d400) Stream added, broadcasting: 1
I0827 15:27:28.779759       7 log.go:172] (0x4001cc8370) Reply frame received for 1
I0827 15:27:28.780027       7 log.go:172] (0x4001cc8370) (0x400112d5e0) Create stream
I0827 15:27:28.780117       7 log.go:172] (0x4001cc8370) (0x400112d5e0) Stream added, broadcasting: 3
I0827 15:27:28.781879       7 log.go:172] (0x4001cc8370) Reply frame received for 3
I0827 15:27:28.782000       7 log.go:172] (0x4001cc8370) (0x4001268f00) Create stream
I0827 15:27:28.782077       7 log.go:172] (0x4001cc8370) (0x4001268f00) Stream added, broadcasting: 5
I0827 15:27:28.783477       7 log.go:172] (0x4001cc8370) Reply frame received for 5
I0827 15:27:28.845487       7 log.go:172] (0x4001cc8370) Data frame received for 3
I0827 15:27:28.845651       7 log.go:172] (0x400112d5e0) (3) Data frame handling
I0827 15:27:28.845797       7 log.go:172] (0x4001cc8370) Data frame received for 5
I0827 15:27:28.846029       7 log.go:172] (0x4001268f00) (5) Data frame handling
I0827 15:27:28.846153       7 log.go:172] (0x400112d5e0) (3) Data frame sent
I0827 15:27:28.846268       7 log.go:172] (0x4001cc8370) Data frame received for 3
I0827 15:27:28.846351       7 log.go:172] (0x400112d5e0) (3) Data frame handling
I0827 15:27:28.846552       7 log.go:172] (0x4001cc8370) Data frame received for 1
I0827 15:27:28.846636       7 log.go:172] (0x400112d400) (1) Data frame handling
I0827 15:27:28.846723       7 log.go:172] (0x400112d400) (1) Data frame sent
I0827 15:27:28.846797       7 log.go:172] (0x4001cc8370) (0x400112d400) Stream removed, broadcasting: 1
I0827 15:27:28.846888       7 log.go:172] (0x4001cc8370) Go away received
I0827 15:27:28.847271       7 log.go:172] (0x4001cc8370) (0x400112d400) Stream removed, broadcasting: 1
I0827 15:27:28.847371       7 log.go:172] (0x4001cc8370) (0x400112d5e0) Stream removed, broadcasting: 3
I0827 15:27:28.847447       7 log.go:172] (0x4001cc8370) (0x4001268f00) Stream removed, broadcasting: 5
Aug 27 15:27:28.847: INFO: Exec stderr: ""
Aug 27 15:27:28.848: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-4x6ff PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 15:27:28.848: INFO: >>> kubeConfig: /root/.kube/config
I0827 15:27:28.914041       7 log.go:172] (0x40003be9a0) (0x400136cfa0) Create stream
I0827 15:27:28.914171       7 log.go:172] (0x40003be9a0) (0x400136cfa0) Stream added, broadcasting: 1
I0827 15:27:28.918139       7 log.go:172] (0x40003be9a0) Reply frame received for 1
I0827 15:27:28.918411       7 log.go:172] (0x40003be9a0) (0x400136d040) Create stream
I0827 15:27:28.918496       7 log.go:172] (0x40003be9a0) (0x400136d040) Stream added, broadcasting: 3
I0827 15:27:28.919766       7 log.go:172] (0x40003be9a0) Reply frame received for 3
I0827 15:27:28.919923       7 log.go:172] (0x40003be9a0) (0x40020aa320) Create stream
I0827 15:27:28.920014       7 log.go:172] (0x40003be9a0) (0x40020aa320) Stream added, broadcasting: 5
I0827 15:27:28.921290       7 log.go:172] (0x40003be9a0) Reply frame received for 5
I0827 15:27:28.978634       7 log.go:172] (0x40003be9a0) Data frame received for 5
I0827 15:27:28.978770       7 log.go:172] (0x40020aa320) (5) Data frame handling
I0827 15:27:28.978946       7 log.go:172] (0x40003be9a0) Data frame received for 3
I0827 15:27:28.979170       7 log.go:172] (0x400136d040) (3) Data frame handling
I0827 15:27:28.979337       7 log.go:172] (0x400136d040) (3) Data frame sent
I0827 15:27:28.979463       7 log.go:172] (0x40003be9a0) Data frame received for 3
I0827 15:27:28.979579       7 log.go:172] (0x400136d040) (3) Data frame handling
I0827 15:27:28.979661       7 log.go:172] (0x40003be9a0) Data frame received for 1
I0827 15:27:28.979758       7 log.go:172] (0x400136cfa0) (1) Data frame handling
I0827 15:27:28.979847       7 log.go:172] (0x400136cfa0) (1) Data frame sent
I0827 15:27:28.979935       7 log.go:172] (0x40003be9a0) (0x400136cfa0) Stream removed, broadcasting: 1
I0827 15:27:28.980102       7 log.go:172] (0x40003be9a0) Go away received
I0827 15:27:28.980535       7 log.go:172] (0x40003be9a0) (0x400136cfa0) Stream removed, broadcasting: 1
I0827 15:27:28.980624       7 log.go:172] (0x40003be9a0) (0x400136d040) Stream removed, broadcasting: 3
I0827 15:27:28.980684       7 log.go:172] (0x40003be9a0) (0x40020aa320) Stream removed, broadcasting: 5
Aug 27 15:27:28.980: INFO: Exec stderr: ""
Aug 27 15:27:28.981: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-4x6ff PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 15:27:28.981: INFO: >>> kubeConfig: /root/.kube/config
I0827 15:27:29.030499       7 log.go:172] (0x40015e6370) (0x4001269220) Create stream
I0827 15:27:29.030663       7 log.go:172] (0x40015e6370) (0x4001269220) Stream added, broadcasting: 1
I0827 15:27:29.033585       7 log.go:172] (0x40015e6370) Reply frame received for 1
I0827 15:27:29.033733       7 log.go:172] (0x40015e6370) (0x4001269360) Create stream
I0827 15:27:29.033814       7 log.go:172] (0x40015e6370) (0x4001269360) Stream added, broadcasting: 3
I0827 15:27:29.035349       7 log.go:172] (0x40015e6370) Reply frame received for 3
I0827 15:27:29.035545       7 log.go:172] (0x40015e6370) (0x400136d0e0) Create stream
I0827 15:27:29.035636       7 log.go:172] (0x40015e6370) (0x400136d0e0) Stream added, broadcasting: 5
I0827 15:27:29.037246       7 log.go:172] (0x40015e6370) Reply frame received for 5
I0827 15:27:29.109079       7 log.go:172] (0x40015e6370) Data frame received for 5
I0827 15:27:29.109225       7 log.go:172] (0x400136d0e0) (5) Data frame handling
I0827 15:27:29.109367       7 log.go:172] (0x40015e6370) Data frame received for 3
I0827 15:27:29.109496       7 log.go:172] (0x4001269360) (3) Data frame handling
I0827 15:27:29.109647       7 log.go:172] (0x4001269360) (3) Data frame sent
I0827 15:27:29.109780       7 log.go:172] (0x40015e6370) Data frame received for 3
I0827 15:27:29.109880       7 log.go:172] (0x4001269360) (3) Data frame handling
I0827 15:27:29.111142       7 log.go:172] (0x40015e6370) Data frame received for 1
I0827 15:27:29.111264       7 log.go:172] (0x4001269220) (1) Data frame handling
I0827 15:27:29.111372       7 log.go:172] (0x4001269220) (1) Data frame sent
I0827 15:27:29.111486       7 log.go:172] (0x40015e6370) (0x4001269220) Stream removed, broadcasting: 1
I0827 15:27:29.111624       7 log.go:172] (0x40015e6370) Go away received
I0827 15:27:29.112118       7 log.go:172] (0x40015e6370) (0x4001269220) Stream removed, broadcasting: 1
I0827 15:27:29.112297       7 log.go:172] (0x40015e6370) (0x4001269360) Stream removed, broadcasting: 3
I0827 15:27:29.112398       7 log.go:172] (0x40015e6370) (0x400136d0e0) Stream removed, broadcasting: 5
Aug 27 15:27:29.112: INFO: Exec stderr: ""
Aug 27 15:27:29.112: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-4x6ff PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 15:27:29.112: INFO: >>> kubeConfig: /root/.kube/config
I0827 15:27:29.172352       7 log.go:172] (0x40003bee70) (0x400136d400) Create stream
I0827 15:27:29.172491       7 log.go:172] (0x40003bee70) (0x400136d400) Stream added, broadcasting: 1
I0827 15:27:29.176095       7 log.go:172] (0x40003bee70) Reply frame received for 1
I0827 15:27:29.176373       7 log.go:172] (0x40003bee70) (0x4001269400) Create stream
I0827 15:27:29.176504       7 log.go:172] (0x40003bee70) (0x4001269400) Stream added, broadcasting: 3
I0827 15:27:29.178257       7 log.go:172] (0x40003bee70) Reply frame received for 3
I0827 15:27:29.178415       7 log.go:172] (0x40003bee70) (0x400136d4a0) Create stream
I0827 15:27:29.178521       7 log.go:172] (0x40003bee70) (0x400136d4a0) Stream added, broadcasting: 5
I0827 15:27:29.180202       7 log.go:172] (0x40003bee70) Reply frame received for 5
I0827 15:27:29.246155       7 log.go:172] (0x40003bee70) Data frame received for 3
I0827 15:27:29.246326       7 log.go:172] (0x4001269400) (3) Data frame handling
I0827 15:27:29.246425       7 log.go:172] (0x4001269400) (3) Data frame sent
I0827 15:27:29.246513       7 log.go:172] (0x40003bee70) Data frame received for 3
I0827 15:27:29.246585       7 log.go:172] (0x40003bee70) Data frame received for 5
I0827 15:27:29.246689       7 log.go:172] (0x400136d4a0) (5) Data frame handling
I0827 15:27:29.246813       7 log.go:172] (0x4001269400) (3) Data frame handling
I0827 15:27:29.247547       7 log.go:172] (0x40003bee70) Data frame received for 1
I0827 15:27:29.247614       7 log.go:172] (0x400136d400) (1) Data frame handling
I0827 15:27:29.247679       7 log.go:172] (0x400136d400) (1) Data frame sent
I0827 15:27:29.247797       7 log.go:172] (0x40003bee70) (0x400136d400) Stream removed, broadcasting: 1
I0827 15:27:29.247935       7 log.go:172] (0x40003bee70) Go away received
I0827 15:27:29.248384       7 log.go:172] (0x40003bee70) (0x400136d400) Stream removed, broadcasting: 1
I0827 15:27:29.248480       7 log.go:172] (0x40003bee70) (0x4001269400) Stream removed, broadcasting: 3
I0827 15:27:29.248556       7 log.go:172] (0x40003bee70) (0x400136d4a0) Stream removed, broadcasting: 5
Aug 27 15:27:29.248: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Aug 27 15:27:29.248: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-4x6ff PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 15:27:29.249: INFO: >>> kubeConfig: /root/.kube/config
I0827 15:27:29.310674       7 log.go:172] (0x40003bf340) (0x400136d7c0) Create stream
I0827 15:27:29.310816       7 log.go:172] (0x40003bf340) (0x400136d7c0) Stream added, broadcasting: 1
I0827 15:27:29.315114       7 log.go:172] (0x40003bf340) Reply frame received for 1
I0827 15:27:29.315370       7 log.go:172] (0x40003bf340) (0x400136d860) Create stream
I0827 15:27:29.315497       7 log.go:172] (0x40003bf340) (0x400136d860) Stream added, broadcasting: 3
I0827 15:27:29.317682       7 log.go:172] (0x40003bf340) Reply frame received for 3
I0827 15:27:29.317960       7 log.go:172] (0x40003bf340) (0x40012694a0) Create stream
I0827 15:27:29.318065       7 log.go:172] (0x40003bf340) (0x40012694a0) Stream added, broadcasting: 5
I0827 15:27:29.319444       7 log.go:172] (0x40003bf340) Reply frame received for 5
I0827 15:27:29.393473       7 log.go:172] (0x40003bf340) Data frame received for 5
I0827 15:27:29.393627       7 log.go:172] (0x40012694a0) (5) Data frame handling
I0827 15:27:29.393756       7 log.go:172] (0x40003bf340) Data frame received for 3
I0827 15:27:29.393903       7 log.go:172] (0x400136d860) (3) Data frame handling
I0827 15:27:29.394013       7 log.go:172] (0x400136d860) (3) Data frame sent
I0827 15:27:29.394105       7 log.go:172] (0x40003bf340) Data frame received for 3
I0827 15:27:29.394187       7 log.go:172] (0x400136d860) (3) Data frame handling
I0827 15:27:29.395133       7 log.go:172] (0x40003bf340) Data frame received for 1
I0827 15:27:29.395246       7 log.go:172] (0x400136d7c0) (1) Data frame handling
I0827 15:27:29.395353       7 log.go:172] (0x400136d7c0) (1) Data frame sent
I0827 15:27:29.395466       7 log.go:172] (0x40003bf340) (0x400136d7c0) Stream removed, broadcasting: 1
I0827 15:27:29.395760       7 log.go:172] (0x40003bf340) Go away received
I0827 15:27:29.396658       7 log.go:172] (0x40003bf340) (0x400136d7c0) Stream removed, broadcasting: 1
I0827 15:27:29.396968       7 log.go:172] (0x40003bf340) (0x400136d860) Stream removed, broadcasting: 3
I0827 15:27:29.397103       7 log.go:172] (0x40003bf340) (0x40012694a0) Stream removed, broadcasting: 5
Aug 27 15:27:29.397: INFO: Exec stderr: ""
Aug 27 15:27:29.397: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-4x6ff PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 15:27:29.397: INFO: >>> kubeConfig: /root/.kube/config
I0827 15:27:29.447499       7 log.go:172] (0x40003bf810) (0x400136da40) Create stream
I0827 15:27:29.447666       7 log.go:172] (0x40003bf810) (0x400136da40) Stream added, broadcasting: 1
I0827 15:27:29.452050       7 log.go:172] (0x40003bf810) Reply frame received for 1
I0827 15:27:29.452288       7 log.go:172] (0x40003bf810) (0x400136dae0) Create stream
I0827 15:27:29.452388       7 log.go:172] (0x40003bf810) (0x400136dae0) Stream added, broadcasting: 3
I0827 15:27:29.454176       7 log.go:172] (0x40003bf810) Reply frame received for 3
I0827 15:27:29.454320       7 log.go:172] (0x40003bf810) (0x40020aa500) Create stream
I0827 15:27:29.454394       7 log.go:172] (0x40003bf810) (0x40020aa500) Stream added, broadcasting: 5
I0827 15:27:29.455883       7 log.go:172] (0x40003bf810) Reply frame received for 5
I0827 15:27:29.531755       7 log.go:172] (0x40003bf810) Data frame received for 5
I0827 15:27:29.531949       7 log.go:172] (0x40020aa500) (5) Data frame handling
I0827 15:27:29.532126       7 log.go:172] (0x40003bf810) Data frame received for 3
I0827 15:27:29.532285       7 log.go:172] (0x400136dae0) (3) Data frame handling
I0827 15:27:29.532454       7 log.go:172] (0x400136dae0) (3) Data frame sent
I0827 15:27:29.532588       7 log.go:172] (0x40003bf810) Data frame received for 3
I0827 15:27:29.532696       7 log.go:172] (0x400136dae0) (3) Data frame handling
I0827 15:27:29.535438       7 log.go:172] (0x40003bf810) Data frame received for 1
I0827 15:27:29.535552       7 log.go:172] (0x400136da40) (1) Data frame handling
I0827 15:27:29.535699       7 log.go:172] (0x400136da40) (1) Data frame sent
I0827 15:27:29.535839       7 log.go:172] (0x40003bf810) (0x400136da40) Stream removed, broadcasting: 1
I0827 15:27:29.536041       7 log.go:172] (0x40003bf810) Go away received
I0827 15:27:29.536509       7 log.go:172] (0x40003bf810) (0x400136da40) Stream removed, broadcasting: 1
I0827 15:27:29.536669       7 log.go:172] (0x40003bf810) (0x400136dae0) Stream removed, broadcasting: 3
I0827 15:27:29.536844       7 log.go:172] (0x40003bf810) (0x40020aa500) Stream removed, broadcasting: 5
Aug 27 15:27:29.536: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Aug 27 15:27:29.537: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-4x6ff PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 15:27:29.537: INFO: >>> kubeConfig: /root/.kube/config
I0827 15:27:29.660258       7 log.go:172] (0x40014ca2c0) (0x40010f1720) Create stream
I0827 15:27:29.660405       7 log.go:172] (0x40014ca2c0) (0x40010f1720) Stream added, broadcasting: 1
I0827 15:27:29.665934       7 log.go:172] (0x40014ca2c0) Reply frame received for 1
I0827 15:27:29.666107       7 log.go:172] (0x40014ca2c0) (0x40020aa5a0) Create stream
I0827 15:27:29.666175       7 log.go:172] (0x40014ca2c0) (0x40020aa5a0) Stream added, broadcasting: 3
I0827 15:27:29.667528       7 log.go:172] (0x40014ca2c0) Reply frame received for 3
I0827 15:27:29.667655       7 log.go:172] (0x40014ca2c0) (0x40010f17c0) Create stream
I0827 15:27:29.667716       7 log.go:172] (0x40014ca2c0) (0x40010f17c0) Stream added, broadcasting: 5
I0827 15:27:29.668897       7 log.go:172] (0x40014ca2c0) Reply frame received for 5
I0827 15:27:29.722109       7 log.go:172] (0x40014ca2c0) Data frame received for 3
I0827 15:27:29.722268       7 log.go:172] (0x40020aa5a0) (3) Data frame handling
I0827 15:27:29.722368       7 log.go:172] (0x40014ca2c0) Data frame received for 5
I0827 15:27:29.722511       7 log.go:172] (0x40010f17c0) (5) Data frame handling
I0827 15:27:29.722645       7 log.go:172] (0x40020aa5a0) (3) Data frame sent
I0827 15:27:29.722757       7 log.go:172] (0x40014ca2c0) Data frame received for 3
I0827 15:27:29.722844       7 log.go:172] (0x40020aa5a0) (3) Data frame handling
I0827 15:27:29.723212       7 log.go:172] (0x40014ca2c0) Data frame received for 1
I0827 15:27:29.723287       7 log.go:172] (0x40010f1720) (1) Data frame handling
I0827 15:27:29.723375       7 log.go:172] (0x40010f1720) (1) Data frame sent
I0827 15:27:29.723441       7 log.go:172] (0x40014ca2c0) (0x40010f1720) Stream removed, broadcasting: 1
I0827 15:27:29.723520       7 log.go:172] (0x40014ca2c0) Go away received
I0827 15:27:29.723823       7 log.go:172] (0x40014ca2c0) (0x40010f1720) Stream removed, broadcasting: 1
I0827 15:27:29.723912       7 log.go:172] (0x40014ca2c0) (0x40020aa5a0) Stream removed, broadcasting: 3
I0827 15:27:29.724014       7 log.go:172] (0x40014ca2c0) (0x40010f17c0) Stream removed, broadcasting: 5
Aug 27 15:27:29.724: INFO: Exec stderr: ""
Aug 27 15:27:29.724: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-4x6ff PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 15:27:29.724: INFO: >>> kubeConfig: /root/.kube/config
I0827 15:27:29.825900       7 log.go:172] (0x4001cc8840) (0x400112dae0) Create stream
I0827 15:27:29.826110       7 log.go:172] (0x4001cc8840) (0x400112dae0) Stream added, broadcasting: 1
I0827 15:27:29.829521       7 log.go:172] (0x4001cc8840) Reply frame received for 1
I0827 15:27:29.829701       7 log.go:172] (0x4001cc8840) (0x400136db80) Create stream
I0827 15:27:29.829842       7 log.go:172] (0x4001cc8840) (0x400136db80) Stream added, broadcasting: 3
I0827 15:27:29.831362       7 log.go:172] (0x4001cc8840) Reply frame received for 3
I0827 15:27:29.831512       7 log.go:172] (0x4001cc8840) (0x40020aa640) Create stream
I0827 15:27:29.831624       7 log.go:172] (0x4001cc8840) (0x40020aa640) Stream added, broadcasting: 5
I0827 15:27:29.834005       7 log.go:172] (0x4001cc8840) Reply frame received for 5
I0827 15:27:29.896247       7 log.go:172] (0x4001cc8840) Data frame received for 5
I0827 15:27:29.896381       7 log.go:172] (0x40020aa640) (5) Data frame handling
I0827 15:27:29.896477       7 log.go:172] (0x4001cc8840) Data frame received for 3
I0827 15:27:29.896579       7 log.go:172] (0x400136db80) (3) Data frame handling
I0827 15:27:29.896682       7 log.go:172] (0x400136db80) (3) Data frame sent
I0827 15:27:29.896858       7 log.go:172] (0x4001cc8840) Data frame received for 3
I0827 15:27:29.896980       7 log.go:172] (0x400136db80) (3) Data frame handling
I0827 15:27:29.897614       7 log.go:172] (0x4001cc8840) Data frame received for 1
I0827 15:27:29.897732       7 log.go:172] (0x400112dae0) (1) Data frame handling
I0827 15:27:29.897835       7 log.go:172] (0x400112dae0) (1) Data frame sent
I0827 15:27:29.897939       7 log.go:172] (0x4001cc8840) (0x400112dae0) Stream removed, broadcasting: 1
I0827 15:27:29.898064       7 log.go:172] (0x4001cc8840) Go away received
I0827 15:27:29.898460       7 log.go:172] (0x4001cc8840) (0x400112dae0) Stream removed, broadcasting: 1
I0827 15:27:29.898600       7 log.go:172] (0x4001cc8840) (0x400136db80) Stream removed, broadcasting: 3
I0827 15:27:29.898678       7 log.go:172] (0x4001cc8840) (0x40020aa640) Stream removed, broadcasting: 5
Aug 27 15:27:29.898: INFO: Exec stderr: ""
Aug 27 15:27:29.898: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-4x6ff PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 15:27:29.899: INFO: >>> kubeConfig: /root/.kube/config
I0827 15:27:29.953907       7 log.go:172] (0x400099ee70) (0x4000917c20) Create stream
I0827 15:27:29.954075       7 log.go:172] (0x400099ee70) (0x4000917c20) Stream added, broadcasting: 1
I0827 15:27:29.957106       7 log.go:172] (0x400099ee70) Reply frame received for 1
I0827 15:27:29.957237       7 log.go:172] (0x400099ee70) (0x4000917d60) Create stream
I0827 15:27:29.957307       7 log.go:172] (0x400099ee70) (0x4000917d60) Stream added, broadcasting: 3
I0827 15:27:29.958559       7 log.go:172] (0x400099ee70) Reply frame received for 3
I0827 15:27:29.958685       7 log.go:172] (0x400099ee70) (0x4000917ea0) Create stream
I0827 15:27:29.958764       7 log.go:172] (0x400099ee70) (0x4000917ea0) Stream added, broadcasting: 5
I0827 15:27:29.959819       7 log.go:172] (0x400099ee70) Reply frame received for 5
I0827 15:27:30.005642       7 log.go:172] (0x400099ee70) Data frame received for 3
I0827 15:27:30.005796       7 log.go:172] (0x4000917d60) (3) Data frame handling
I0827 15:27:30.005874       7 log.go:172] (0x400099ee70) Data frame received for 5
I0827 15:27:30.005950       7 log.go:172] (0x4000917ea0) (5) Data frame handling
I0827 15:27:30.006027       7 log.go:172] (0x4000917d60) (3) Data frame sent
I0827 15:27:30.006102       7 log.go:172] (0x400099ee70) Data frame received for 3
I0827 15:27:30.006166       7 log.go:172] (0x4000917d60) (3) Data frame handling
I0827 15:27:30.006331       7 log.go:172] (0x400099ee70) Data frame received for 1
I0827 15:27:30.006407       7 log.go:172] (0x4000917c20) (1) Data frame handling
I0827 15:27:30.006488       7 log.go:172] (0x4000917c20) (1) Data frame sent
I0827 15:27:30.006575       7 log.go:172] (0x400099ee70) (0x4000917c20) Stream removed, broadcasting: 1
I0827 15:27:30.006667       7 log.go:172] (0x400099ee70) Go away received
I0827 15:27:30.007161       7 log.go:172] (0x400099ee70) (0x4000917c20) Stream removed, broadcasting: 1
I0827 15:27:30.007228       7 log.go:172] (0x400099ee70) (0x4000917d60) Stream removed, broadcasting: 3
I0827 15:27:30.007279       7 log.go:172] (0x400099ee70) (0x4000917ea0) Stream removed, broadcasting: 5
Aug 27 15:27:30.007: INFO: Exec stderr: ""
Aug 27 15:27:30.007: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-4x6ff PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 15:27:30.007: INFO: >>> kubeConfig: /root/.kube/config
I0827 15:27:30.059612       7 log.go:172] (0x4001cc8d10) (0x400112de00) Create stream
I0827 15:27:30.059768       7 log.go:172] (0x4001cc8d10) (0x400112de00) Stream added, broadcasting: 1
I0827 15:27:30.062500       7 log.go:172] (0x4001cc8d10) Reply frame received for 1
I0827 15:27:30.062627       7 log.go:172] (0x4001cc8d10) (0x400012c000) Create stream
I0827 15:27:30.062685       7 log.go:172] (0x4001cc8d10) (0x400012c000) Stream added, broadcasting: 3
I0827 15:27:30.063722       7 log.go:172] (0x4001cc8d10) Reply frame received for 3
I0827 15:27:30.063844       7 log.go:172] (0x4001cc8d10) (0x400112df40) Create stream
I0827 15:27:30.063913       7 log.go:172] (0x4001cc8d10) (0x400112df40) Stream added, broadcasting: 5
I0827 15:27:30.065304       7 log.go:172] (0x4001cc8d10) Reply frame received for 5
I0827 15:27:30.113123       7 log.go:172] (0x4001cc8d10) Data frame received for 5
I0827 15:27:30.113269       7 log.go:172] (0x400112df40) (5) Data frame handling
I0827 15:27:30.113389       7 log.go:172] (0x4001cc8d10) Data frame received for 3
I0827 15:27:30.113493       7 log.go:172] (0x400012c000) (3) Data frame handling
I0827 15:27:30.113565       7 log.go:172] (0x400012c000) (3) Data frame sent
I0827 15:27:30.113643       7 log.go:172] (0x4001cc8d10) Data frame received for 3
I0827 15:27:30.113734       7 log.go:172] (0x400012c000) (3) Data frame handling
I0827 15:27:30.113962       7 log.go:172] (0x4001cc8d10) Data frame received for 1
I0827 15:27:30.114023       7 log.go:172] (0x400112de00) (1) Data frame handling
I0827 15:27:30.114107       7 log.go:172] (0x400112de00) (1) Data frame sent
I0827 15:27:30.114186       7 log.go:172] (0x4001cc8d10) (0x400112de00) Stream removed, broadcasting: 1
I0827 15:27:30.114275       7 log.go:172] (0x4001cc8d10) Go away received
I0827 15:27:30.114605       7 log.go:172] (0x4001cc8d10) (0x400112de00) Stream removed, broadcasting: 1
I0827 15:27:30.114741       7 log.go:172] (0x4001cc8d10) (0x400012c000) Stream removed, broadcasting: 3
I0827 15:27:30.114842       7 log.go:172] (0x4001cc8d10) (0x400112df40) Stream removed, broadcasting: 5
Aug 27 15:27:30.114: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:27:30.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-4x6ff" for this suite.
Aug 27 15:28:24.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:28:24.998: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-4x6ff, resource: bindings, ignored listing per whitelist
Aug 27 15:28:25.087: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-4x6ff deletion completed in 54.617636534s

• [SLOW TEST:72.719 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
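
The exec streams above are the spec cat-ing /etc/hosts in three situations: containers of an ordinary (hostNetwork=false) pod, a container that mounts /etc/hosts itself, and containers of a hostNetwork=true pod; only the first group gets the kubelet-managed file. The same checks by hand, using the pod and container names from the log (the hostPath mount shown for busybox-3 is an assumption about how the test opts out of management):

# kubelet-managed: the file starts with the kubelet's "Kubernetes-managed hosts file" header
kubectl exec test-pod -c busybox-1 -n e2e-tests-e2e-kubelet-etc-hosts-4x6ff -- cat /etc/hosts
# not managed: the container mounts /etc/hosts explicitly, e.g.
#   volumeMounts: [{name: host-etc-hosts, mountPath: /etc/hosts}]
#   volumes:      [{name: host-etc-hosts, hostPath: {path: /etc/hosts}}]
kubectl exec test-pod -c busybox-3 -n e2e-tests-e2e-kubelet-etc-hosts-4x6ff -- cat /etc/hosts
# not managed either: a hostNetwork=true pod sees the node's own /etc/hosts
kubectl exec test-host-network-pod -c busybox-1 -n e2e-tests-e2e-kubelet-etc-hosts-4x6ff -- cat /etc/hosts
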
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:28:25.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Aug 27 15:28:25.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6bjft'
Aug 27 15:28:29.328: INFO: stderr: ""
Aug 27 15:28:29.328: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 27 15:28:29.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6bjft'
Aug 27 15:28:30.598: INFO: stderr: ""
Aug 27 15:28:30.598: INFO: stdout: "update-demo-nautilus-678kn update-demo-nautilus-zwtzx "
Aug 27 15:28:30.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-678kn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6bjft'
Aug 27 15:28:31.811: INFO: stderr: ""
Aug 27 15:28:31.811: INFO: stdout: ""
Aug 27 15:28:31.811: INFO: update-demo-nautilus-678kn is created but not running
Aug 27 15:28:36.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6bjft'
Aug 27 15:28:38.153: INFO: stderr: ""
Aug 27 15:28:38.153: INFO: stdout: "update-demo-nautilus-678kn update-demo-nautilus-zwtzx "
Aug 27 15:28:38.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-678kn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6bjft'
Aug 27 15:28:39.366: INFO: stderr: ""
Aug 27 15:28:39.366: INFO: stdout: "true"
Aug 27 15:28:39.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-678kn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6bjft'
Aug 27 15:28:40.592: INFO: stderr: ""
Aug 27 15:28:40.592: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 15:28:40.592: INFO: validating pod update-demo-nautilus-678kn
Aug 27 15:28:40.598: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 15:28:40.598: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 27 15:28:40.598: INFO: update-demo-nautilus-678kn is verified up and running
Aug 27 15:28:40.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zwtzx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6bjft'
Aug 27 15:28:41.871: INFO: stderr: ""
Aug 27 15:28:41.871: INFO: stdout: "true"
Aug 27 15:28:41.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zwtzx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6bjft'
Aug 27 15:28:43.096: INFO: stderr: ""
Aug 27 15:28:43.096: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 15:28:43.096: INFO: validating pod update-demo-nautilus-zwtzx
Aug 27 15:28:43.113: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 15:28:43.113: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 27 15:28:43.113: INFO: update-demo-nautilus-zwtzx is verified up and running
STEP: using delete to clean up resources
Aug 27 15:28:43.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6bjft'
Aug 27 15:28:44.383: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 15:28:44.383: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 27 15:28:44.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-6bjft'
Aug 27 15:28:45.661: INFO: stderr: "No resources found.\n"
Aug 27 15:28:45.662: INFO: stdout: ""
Aug 27 15:28:45.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-6bjft -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 27 15:28:46.966: INFO: stderr: ""
Aug 27 15:28:46.966: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:28:46.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6bjft" for this suite.
Aug 27 15:29:11.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:29:11.094: INFO: namespace: e2e-tests-kubectl-6bjft, resource: bindings, ignored listing per whitelist
Aug 27 15:29:11.139: INFO: namespace e2e-tests-kubectl-6bjft deletion completed in 24.161959663s

• [SLOW TEST:46.051 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
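For reference, the manifest this spec pipes into kubectl create -f - is, as far as the log shows, a ReplicationController that runs two nautilus pods selected by the name=update-demo label. A sketch reconstructed from those log lines (the exact field layout is an assumption, not the suite's actual testdata):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
EOF
# the spec then polls the pods with the go-templates shown in the log and
# cleans up with a force delete, equivalent to:
kubectl delete rc update-demo-nautilus --grace-period=0 --force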
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:29:11.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 27 15:29:11.751: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 15:29:12.318: INFO: Number of nodes with available pods: 0
Aug 27 15:29:12.318: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 15:29:13.525: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 15:29:13.620: INFO: Number of nodes with available pods: 0
Aug 27 15:29:13.621: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 15:29:14.330: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 15:29:14.336: INFO: Number of nodes with available pods: 0
Aug 27 15:29:14.336: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 15:29:15.701: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 15:29:16.138: INFO: Number of nodes with available pods: 0
Aug 27 15:29:16.138: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 15:29:16.357: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 15:29:16.362: INFO: Number of nodes with available pods: 0
Aug 27 15:29:16.362: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 15:29:17.331: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 15:29:17.337: INFO: Number of nodes with available pods: 0
Aug 27 15:29:17.337: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 15:29:18.406: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 15:29:18.779: INFO: Number of nodes with available pods: 0
Aug 27 15:29:18.780: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 15:29:19.342: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 15:29:19.347: INFO: Number of nodes with available pods: 0
Aug 27 15:29:19.347: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 15:29:20.330: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 15:29:20.336: INFO: Number of nodes with available pods: 1
Aug 27 15:29:20.337: INFO: Node hunter-worker2 is running more than one daemon pod
Aug 27 15:29:21.330: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 15:29:21.336: INFO: Number of nodes with available pods: 2
Aug 27 15:29:21.336: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Aug 27 15:29:21.573: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 15:29:21.578: INFO: Number of nodes with available pods: 1
Aug 27 15:29:21.579: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 15:29:22.633: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 15:29:22.639: INFO: Number of nodes with available pods: 1
Aug 27 15:29:22.639: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 15:29:23.589: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 15:29:23.595: INFO: Number of nodes with available pods: 1
Aug 27 15:29:23.595: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 15:29:24.789: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 15:29:24.817: INFO: Number of nodes with available pods: 1
Aug 27 15:29:24.817: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 15:29:25.588: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 15:29:25.593: INFO: Number of nodes with available pods: 1
Aug 27 15:29:25.593: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 15:29:26.588: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 15:29:26.594: INFO: Number of nodes with available pods: 1
Aug 27 15:29:26.594: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 15:29:27.589: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 15:29:27.595: INFO: Number of nodes with available pods: 1
Aug 27 15:29:27.595: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 15:29:28.588: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 15:29:28.593: INFO: Number of nodes with available pods: 1
Aug 27 15:29:28.593: INFO: Node hunter-worker is running more than one daemon pod
Aug 27 15:29:29.590: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 15:29:29.597: INFO: Number of nodes with available pods: 2
Aug 27 15:29:29.597: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-prdc5, will wait for the garbage collector to delete the pods
Aug 27 15:29:29.665: INFO: Deleting DaemonSet.extensions daemon-set took: 7.296978ms
Aug 27 15:29:29.765: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.745576ms
Aug 27 15:29:34.992: INFO: Number of nodes with available pods: 0
Aug 27 15:29:34.992: INFO: Number of running nodes: 0, number of available pods: 0
Aug 27 15:29:34.995: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-prdc5/daemonsets","resourceVersion":"2659415"},"items":null}

Aug 27 15:29:34.998: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-prdc5/pods","resourceVersion":"2659415"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:29:35.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-prdc5" for this suite.
Aug 27 15:29:43.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:29:43.075: INFO: namespace: e2e-tests-daemonsets-prdc5, resource: bindings, ignored listing per whitelist
Aug 27 15:29:43.154: INFO: namespace e2e-tests-daemonsets-prdc5 deletion completed in 8.134913372s

• [SLOW TEST:32.013 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
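A minimal sketch of a DaemonSet equivalent to the daemon-set created above: the controller places one pod on every schedulable node, skips hunter-control-plane because its node-role.kubernetes.io/master:NoSchedule taint is not tolerated, and revives a pod that is deleted. Image and labels here are illustrative, not the suite's testdata:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1    # illustrative image
EOF
# one pod per worker node; delete one and the controller recreates it
kubectl get pods -l app=daemon-set -o wide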
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:29:43.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-spspc
Aug 27 15:29:53.676: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-spspc
STEP: checking the pod's current state and verifying that restartCount is present
Aug 27 15:29:53.679: INFO: Initial restart count of pod liveness-http is 0
Aug 27 15:30:10.513: INFO: Restart count of pod e2e-tests-container-probe-spspc/liveness-http is now 1 (16.834412506s elapsed)
Aug 27 15:30:30.740: INFO: Restart count of pod e2e-tests-container-probe-spspc/liveness-http is now 2 (37.061105222s elapsed)
Aug 27 15:30:52.504: INFO: Restart count of pod e2e-tests-container-probe-spspc/liveness-http is now 3 (58.825376646s elapsed)
Aug 27 15:31:10.913: INFO: Restart count of pod e2e-tests-container-probe-spspc/liveness-http is now 4 (1m17.233737637s elapsed)
Aug 27 15:32:12.086: INFO: Restart count of pod e2e-tests-container-probe-spspc/liveness-http is now 5 (2m18.406700822s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:32:12.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-spspc" for this suite.
Aug 27 15:32:23.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:32:23.151: INFO: namespace: e2e-tests-container-probe-spspc, resource: bindings, ignored listing per whitelist
Aug 27 15:32:23.249: INFO: namespace e2e-tests-container-probe-spspc deletion completed in 10.433684373s

• [SLOW TEST:160.094 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
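The liveness-http pod in the spec above uses an HTTP livenessProbe against a server whose health check eventually starts failing, so the kubelet keeps restarting the container and restartCount only ever increases. A hedged sketch of that shape; the image is a placeholder and the probe timings are illustrative:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: <image-serving-/healthz-that-later-fails>   # placeholder
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 3
      failureThreshold: 1
EOF
# the restart count should climb monotonically, as in the log above
kubectl get pod liveness-http -o jsonpath='{.status.containerStatuses[0].restartCount}'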
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:32:23.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-814d3733-e87a-11ea-894e-0242ac11000d
STEP: Creating a pod to test consume secrets
Aug 27 15:32:25.045: INFO: Waiting up to 5m0s for pod "pod-secrets-81c636cf-e87a-11ea-894e-0242ac11000d" in namespace "e2e-tests-secrets-dqx66" to be "success or failure"
Aug 27 15:32:25.716: INFO: Pod "pod-secrets-81c636cf-e87a-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 670.815426ms
Aug 27 15:32:27.722: INFO: Pod "pod-secrets-81c636cf-e87a-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.676929564s
Aug 27 15:32:29.817: INFO: Pod "pod-secrets-81c636cf-e87a-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 4.771612915s
Aug 27 15:32:31.824: INFO: Pod "pod-secrets-81c636cf-e87a-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.778903884s
STEP: Saw pod success
Aug 27 15:32:31.824: INFO: Pod "pod-secrets-81c636cf-e87a-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:32:31.830: INFO: Trying to get logs from node hunter-worker pod pod-secrets-81c636cf-e87a-11ea-894e-0242ac11000d container secret-volume-test: 
STEP: delete the pod
Aug 27 15:32:31.874: INFO: Waiting for pod pod-secrets-81c636cf-e87a-11ea-894e-0242ac11000d to disappear
Aug 27 15:32:31.904: INFO: Pod pod-secrets-81c636cf-e87a-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:32:31.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-dqx66" for this suite.
Aug 27 15:32:37.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:32:38.079: INFO: namespace: e2e-tests-secrets-dqx66, resource: bindings, ignored listing per whitelist
Aug 27 15:32:38.094: INFO: namespace e2e-tests-secrets-dqx66 deletion completed in 6.18139735s
STEP: Destroying namespace "e2e-tests-secret-namespace-m9sg5" for this suite.
Aug 27 15:32:44.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:32:44.195: INFO: namespace: e2e-tests-secret-namespace-m9sg5, resource: bindings, ignored listing per whitelist
Aug 27 15:32:44.743: INFO: namespace e2e-tests-secret-namespace-m9sg5 deletion completed in 6.64939311s

• [SLOW TEST:21.494 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
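What this spec exercises: a secret volume is resolved by name inside the pod's own namespace, so an unrelated secret with the same name in another namespace cannot leak in. A rough reproduction with illustrative names (test-ns, other-ns, secret-test):

kubectl create namespace test-ns
kubectl create namespace other-ns
kubectl create secret generic secret-test --from-literal=data-1=value-1 -n test-ns
kubectl create secret generic secret-test --from-literal=data-1=other-value -n other-ns
cat <<'EOF' | kubectl create -f - -n test-ns
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test    # resolved in the pod's own namespace only
EOF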
SSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:32:44.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Aug 27 15:32:45.297: INFO: Waiting up to 5m0s for pod "downward-api-8dde02f5-e87a-11ea-894e-0242ac11000d" in namespace "e2e-tests-downward-api-nkkr9" to be "success or failure"
Aug 27 15:32:45.309: INFO: Pod "downward-api-8dde02f5-e87a-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.988622ms
Aug 27 15:32:47.681: INFO: Pod "downward-api-8dde02f5-e87a-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.382890694s
Aug 27 15:32:49.687: INFO: Pod "downward-api-8dde02f5-e87a-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 4.388950167s
Aug 27 15:32:51.693: INFO: Pod "downward-api-8dde02f5-e87a-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.395032277s
STEP: Saw pod success
Aug 27 15:32:51.693: INFO: Pod "downward-api-8dde02f5-e87a-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:32:51.709: INFO: Trying to get logs from node hunter-worker pod downward-api-8dde02f5-e87a-11ea-894e-0242ac11000d container dapi-container: 
STEP: delete the pod
Aug 27 15:32:51.740: INFO: Waiting for pod downward-api-8dde02f5-e87a-11ea-894e-0242ac11000d to disappear
Aug 27 15:32:51.763: INFO: Pod downward-api-8dde02f5-e87a-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:32:51.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-nkkr9" for this suite.
Aug 27 15:33:08.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:33:08.411: INFO: namespace: e2e-tests-downward-api-nkkr9, resource: bindings, ignored listing per whitelist
Aug 27 15:33:09.778: INFO: namespace e2e-tests-downward-api-nkkr9 deletion completed in 18.006016687s

• [SLOW TEST:25.033 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
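In the spec above, the dapi-container reads its cpu and memory limits through downward API environment variables (resourceFieldRef); because the container declares no limits, the injected values fall back to the node's allocatable capacity, which is what gets asserted. A minimal sketch with illustrative image, pod and variable names:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-api
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep LIMIT"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu      # falls back to node allocatable when no limit is set
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF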
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:33:09.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Aug 27 15:33:13.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Aug 27 15:33:17.194: INFO: stderr: ""
Aug 27 15:33:17.194: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45087\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45087/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:33:17.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6nfwh" for this suite.
Aug 27 15:33:26.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:33:26.771: INFO: namespace: e2e-tests-kubectl-6nfwh, resource: bindings, ignored listing per whitelist
Aug 27 15:33:27.090: INFO: namespace e2e-tests-kubectl-6nfwh deletion completed in 9.888081624s

• [SLOW TEST:17.312 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
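The check above only asserts that the master and KubeDNS endpoints appear in kubectl cluster-info's (ANSI-colored) output; run by hand it looks like this, with host and port depending on the cluster:

kubectl --kubeconfig=/root/.kube/config cluster-info
# Kubernetes master is running at https://<apiserver-host>:<port>
# KubeDNS is running at https://<apiserver-host>:<port>/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy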
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:33:27.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Aug 27 15:33:41.503: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:34:14.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-shx8r" for this suite.
Aug 27 15:34:20.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:34:20.957: INFO: namespace: e2e-tests-namespaces-shx8r, resource: bindings, ignored listing per whitelist
Aug 27 15:34:21.187: INFO: namespace e2e-tests-namespaces-shx8r deletion completed in 6.494458869s
STEP: Destroying namespace "e2e-tests-nsdeletetest-7nws2" for this suite.
Aug 27 15:34:21.192: INFO: Namespace e2e-tests-nsdeletetest-7nws2 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-fwqrw" for this suite.
Aug 27 15:34:29.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:34:30.192: INFO: namespace: e2e-tests-nsdeletetest-fwqrw, resource: bindings, ignored listing per whitelist
Aug 27 15:34:30.195: INFO: namespace e2e-tests-nsdeletetest-fwqrw deletion completed in 9.003656498s

• [SLOW TEST:63.104 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
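The behaviour verified above can be reproduced directly with kubectl: deleting a namespace removes every pod in it, and a namespace recreated under the same name starts out empty (names and image below are illustrative):

kubectl create namespace nsdeletetest
kubectl run test-pod --image=k8s.gcr.io/pause:3.1 --restart=Never -n nsdeletetest
kubectl delete namespace nsdeletetest     # cascades: the pod is removed with the namespace
kubectl create namespace nsdeletetest     # recreate under the same name
kubectl get pods -n nsdeletetest          # no resources found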
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:34:30.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 27 15:34:32.859: INFO: Waiting up to 5m0s for pod "pod-cd9b738c-e87a-11ea-894e-0242ac11000d" in namespace "e2e-tests-emptydir-pk68x" to be "success or failure"
Aug 27 15:34:33.363: INFO: Pod "pod-cd9b738c-e87a-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 503.822455ms
Aug 27 15:34:35.368: INFO: Pod "pod-cd9b738c-e87a-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.509229529s
Aug 27 15:34:37.375: INFO: Pod "pod-cd9b738c-e87a-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.515648666s
Aug 27 15:34:39.381: INFO: Pod "pod-cd9b738c-e87a-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.522057061s
STEP: Saw pod success
Aug 27 15:34:39.381: INFO: Pod "pod-cd9b738c-e87a-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:34:39.385: INFO: Trying to get logs from node hunter-worker pod pod-cd9b738c-e87a-11ea-894e-0242ac11000d container test-container: 
STEP: delete the pod
Aug 27 15:34:39.408: INFO: Waiting for pod pod-cd9b738c-e87a-11ea-894e-0242ac11000d to disappear
Aug 27 15:34:39.424: INFO: Pod pod-cd9b738c-e87a-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:34:39.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-pk68x" for this suite.
Aug 27 15:34:45.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:34:45.564: INFO: namespace: e2e-tests-emptydir-pk68x, resource: bindings, ignored listing per whitelist
Aug 27 15:34:45.601: INFO: namespace e2e-tests-emptydir-pk68x deletion completed in 6.167466201s

• [SLOW TEST:15.405 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
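The emptydir specs run a short-lived pod that creates a file on an emptyDir volume with the requested mode and prints the result; the (non-root,0777,default) variant does this as a non-root UID on the node's default storage medium. A hedged approximation using a stock busybox image (UID, names and paths are illustrative):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                  # non-root, illustrative UID
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /mnt/test && chmod 0777 /mnt/test && ls -l /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir: {}                     # default medium: node-local disk
EOF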
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:34:45.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-d5ceaea5-e87a-11ea-894e-0242ac11000d
STEP: Creating configMap with name cm-test-opt-upd-d5cebf04-e87a-11ea-894e-0242ac11000d
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-d5ceaea5-e87a-11ea-894e-0242ac11000d
STEP: Updating configmap cm-test-opt-upd-d5cebf04-e87a-11ea-894e-0242ac11000d
STEP: Creating configMap with name cm-test-opt-create-d5cecb4b-e87a-11ea-894e-0242ac11000d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:34:58.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-x2dlx" for this suite.
Aug 27 15:35:26.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:35:26.300: INFO: namespace: e2e-tests-projected-x2dlx, resource: bindings, ignored listing per whitelist
Aug 27 15:35:26.302: INFO: namespace e2e-tests-projected-x2dlx deletion completed in 28.137465028s

• [SLOW TEST:40.700 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
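The spec above mounts several configMaps through one projected volume with optional: true, then deletes one, updates another and creates a third, waiting for the kubelet to reflect each change in the mounted files. A minimal sketch of that volume shape; names, image and mount path are illustrative:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected
      mountPath: /etc/projected-configmaps
  volumes:
  - name: projected
    projected:
      sources:
      - configMap:
          name: cm-test-opt-del      # may be deleted later; optional keeps the pod healthy
          optional: true
      - configMap:
          name: cm-test-opt-upd
          optional: true
EOF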
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:35:26.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 27 15:35:27.892: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/: directory listing containing alternatives.log and containers/
[log capture truncated here: the remaining proxy requests to /api/v1/nodes/hunter-worker/proxy/logs/ returned the same listing fragments, and this spec's teardown and timing summary, along with the header of the next spec, were lost]
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 27 15:35:35.659: INFO: Waiting up to 5m0s for pod "pod-f35a7bcc-e87a-11ea-894e-0242ac11000d" in namespace "e2e-tests-emptydir-jz9dl" to be "success or failure"
Aug 27 15:35:35.711: INFO: Pod "pod-f35a7bcc-e87a-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 51.890038ms
Aug 27 15:35:37.717: INFO: Pod "pod-f35a7bcc-e87a-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057700028s
Aug 27 15:35:39.724: INFO: Pod "pod-f35a7bcc-e87a-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064662717s
Aug 27 15:35:42.058: INFO: Pod "pod-f35a7bcc-e87a-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.398602358s
Aug 27 15:35:44.065: INFO: Pod "pod-f35a7bcc-e87a-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 8.405403622s
Aug 27 15:35:46.753: INFO: Pod "pod-f35a7bcc-e87a-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.093697438s
STEP: Saw pod success
Aug 27 15:35:46.753: INFO: Pod "pod-f35a7bcc-e87a-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:35:46.758: INFO: Trying to get logs from node hunter-worker pod pod-f35a7bcc-e87a-11ea-894e-0242ac11000d container test-container: 
STEP: delete the pod
Aug 27 15:35:46.828: INFO: Waiting for pod pod-f35a7bcc-e87a-11ea-894e-0242ac11000d to disappear
Aug 27 15:35:47.117: INFO: Pod pod-f35a7bcc-e87a-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:35:47.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-jz9dl" for this suite.
Aug 27 15:35:59.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:35:59.796: INFO: namespace: e2e-tests-emptydir-jz9dl, resource: bindings, ignored listing per whitelist
Aug 27 15:35:59.809: INFO: namespace e2e-tests-emptydir-jz9dl deletion completed in 12.137953627s

• [SLOW TEST:24.639 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
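This variant is the same shape as the (non-root,0777,default) emptyDir sketch earlier, except that the pod runs as root, the file mode checked is 0666, and the volume is declared with emptyDir.medium: Memory, so the mount is backed by tmpfs. With a long-running pod declared that way (pod name illustrative), the medium can be confirmed from inside the container:

kubectl exec pod-emptydir-tmpfs -- sh -c 'mount | grep /mnt'
# expected: a tmpfs filesystem mounted at the emptyDir's mountPath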
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:35:59.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Aug 27 15:36:01.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kfkdj'
Aug 27 15:36:04.625: INFO: stderr: ""
Aug 27 15:36:04.625: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 27 15:36:04.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kfkdj'
Aug 27 15:36:06.507: INFO: stderr: ""
Aug 27 15:36:06.507: INFO: stdout: "update-demo-nautilus-h4j8m update-demo-nautilus-s6dtp "
Aug 27 15:36:06.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h4j8m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kfkdj'
Aug 27 15:36:08.370: INFO: stderr: ""
Aug 27 15:36:08.371: INFO: stdout: ""
Aug 27 15:36:08.371: INFO: update-demo-nautilus-h4j8m is created but not running
Aug 27 15:36:13.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kfkdj'
Aug 27 15:36:14.803: INFO: stderr: ""
Aug 27 15:36:14.803: INFO: stdout: "update-demo-nautilus-h4j8m update-demo-nautilus-s6dtp "
Aug 27 15:36:14.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h4j8m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kfkdj'
Aug 27 15:36:16.206: INFO: stderr: ""
Aug 27 15:36:16.206: INFO: stdout: "true"
Aug 27 15:36:16.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h4j8m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kfkdj'
Aug 27 15:36:17.495: INFO: stderr: ""
Aug 27 15:36:17.495: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 15:36:17.495: INFO: validating pod update-demo-nautilus-h4j8m
Aug 27 15:36:17.501: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 15:36:17.501: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 27 15:36:17.501: INFO: update-demo-nautilus-h4j8m is verified up and running
Aug 27 15:36:17.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s6dtp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kfkdj'
Aug 27 15:36:18.789: INFO: stderr: ""
Aug 27 15:36:18.789: INFO: stdout: "true"
Aug 27 15:36:18.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s6dtp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kfkdj'
Aug 27 15:36:20.101: INFO: stderr: ""
Aug 27 15:36:20.101: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 15:36:20.101: INFO: validating pod update-demo-nautilus-s6dtp
Aug 27 15:36:20.107: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 15:36:20.107: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 27 15:36:20.107: INFO: update-demo-nautilus-s6dtp is verified up and running
STEP: scaling down the replication controller
Aug 27 15:36:20.114: INFO: scanned /root for discovery docs: 
Aug 27 15:36:20.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-kfkdj'
Aug 27 15:36:22.620: INFO: stderr: ""
Aug 27 15:36:22.620: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 27 15:36:22.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kfkdj'
Aug 27 15:36:24.204: INFO: stderr: ""
Aug 27 15:36:24.205: INFO: stdout: "update-demo-nautilus-h4j8m update-demo-nautilus-s6dtp "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 27 15:36:29.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kfkdj'
Aug 27 15:36:30.636: INFO: stderr: ""
Aug 27 15:36:30.637: INFO: stdout: "update-demo-nautilus-h4j8m "
Aug 27 15:36:30.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h4j8m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kfkdj'
Aug 27 15:36:32.023: INFO: stderr: ""
Aug 27 15:36:32.023: INFO: stdout: "true"
Aug 27 15:36:32.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h4j8m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kfkdj'
Aug 27 15:36:33.301: INFO: stderr: ""
Aug 27 15:36:33.301: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 15:36:33.301: INFO: validating pod update-demo-nautilus-h4j8m
Aug 27 15:36:33.338: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 15:36:33.339: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 27 15:36:33.339: INFO: update-demo-nautilus-h4j8m is verified up and running
STEP: scaling up the replication controller
Aug 27 15:36:33.348: INFO: scanned /root for discovery docs: 
Aug 27 15:36:33.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-kfkdj'
Aug 27 15:36:35.977: INFO: stderr: ""
Aug 27 15:36:35.978: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 27 15:36:35.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kfkdj'
Aug 27 15:36:37.282: INFO: stderr: ""
Aug 27 15:36:37.283: INFO: stdout: "update-demo-nautilus-5zzxn update-demo-nautilus-h4j8m "
Aug 27 15:36:37.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5zzxn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kfkdj'
Aug 27 15:36:39.001: INFO: stderr: ""
Aug 27 15:36:39.001: INFO: stdout: ""
Aug 27 15:36:39.001: INFO: update-demo-nautilus-5zzxn is created but not running
Aug 27 15:36:44.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kfkdj'
Aug 27 15:36:45.278: INFO: stderr: ""
Aug 27 15:36:45.278: INFO: stdout: "update-demo-nautilus-5zzxn update-demo-nautilus-h4j8m "
Aug 27 15:36:45.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5zzxn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kfkdj'
Aug 27 15:36:46.741: INFO: stderr: ""
Aug 27 15:36:46.742: INFO: stdout: "true"
Aug 27 15:36:46.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5zzxn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kfkdj'
Aug 27 15:36:47.995: INFO: stderr: ""
Aug 27 15:36:47.995: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 15:36:47.995: INFO: validating pod update-demo-nautilus-5zzxn
Aug 27 15:36:48.000: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 15:36:48.000: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 27 15:36:48.001: INFO: update-demo-nautilus-5zzxn is verified up and running
Aug 27 15:36:48.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h4j8m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kfkdj'
Aug 27 15:36:49.274: INFO: stderr: ""
Aug 27 15:36:49.275: INFO: stdout: "true"
Aug 27 15:36:49.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h4j8m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kfkdj'
Aug 27 15:36:50.526: INFO: stderr: ""
Aug 27 15:36:50.526: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 15:36:50.526: INFO: validating pod update-demo-nautilus-h4j8m
Aug 27 15:36:50.530: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 15:36:50.530: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 27 15:36:50.530: INFO: update-demo-nautilus-h4j8m is verified up and running
STEP: using delete to clean up resources
Aug 27 15:36:50.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-kfkdj'
Aug 27 15:36:51.921: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 15:36:51.922: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 27 15:36:51.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-kfkdj'
Aug 27 15:36:53.183: INFO: stderr: "No resources found.\n"
Aug 27 15:36:53.184: INFO: stdout: ""
Aug 27 15:36:53.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-kfkdj -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 27 15:36:54.454: INFO: stderr: ""
Aug 27 15:36:54.454: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:36:54.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-kfkdj" for this suite.
Aug 27 15:37:04.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:37:04.587: INFO: namespace: e2e-tests-kubectl-kfkdj, resource: bindings, ignored listing per whitelist
Aug 27 15:37:04.643: INFO: namespace e2e-tests-kubectl-kfkdj deletion completed in 10.182039361s

• [SLOW TEST:64.833 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
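The scale steps above are plain kubectl scale calls against the replication controller: the spec scales down to one replica, re-validates the remaining nautilus pod, then scales back up to two. Stripped of the kubeconfig and test-namespace flags, the same sequence is:

kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m
kubectl get pods -l name=update-demo      # one nautilus pod remains
kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m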
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:37:04.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-289cea9d-e87b-11ea-894e-0242ac11000d
STEP: Creating secret with name s-test-opt-upd-289cf7b4-e87b-11ea-894e-0242ac11000d
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-289cea9d-e87b-11ea-894e-0242ac11000d
STEP: Updating secret s-test-opt-upd-289cf7b4-e87b-11ea-894e-0242ac11000d
STEP: Creating secret with name s-test-opt-create-289d0346-e87b-11ea-894e-0242ac11000d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:37:17.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8rj6n" for this suite.
Aug 27 15:37:45.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:37:45.313: INFO: namespace: e2e-tests-projected-8rj6n, resource: bindings, ignored listing per whitelist
Aug 27 15:37:45.391: INFO: namespace e2e-tests-projected-8rj6n deletion completed in 28.309499045s

• [SLOW TEST:40.747 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
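This is the secret counterpart of the projected configMap spec earlier: the projected volume lists secret sources marked optional: true, so deleting, updating and creating the referenced secrets shows up in the mounted files without breaking the pod. Only the sources block really differs; names and image here are illustrative:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected
      mountPath: /etc/projected-secrets
  volumes:
  - name: projected
    projected:
      sources:
      - secret:
          name: s-test-opt-del
          optional: true
      - secret:
          name: s-test-opt-upd
          optional: true
EOF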
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:37:45.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 27 15:37:45.564: INFO: Waiting up to 5m0s for pod "downwardapi-volume-40e41d6a-e87b-11ea-894e-0242ac11000d" in namespace "e2e-tests-downward-api-mrplk" to be "success or failure"
Aug 27 15:37:46.126: INFO: Pod "downwardapi-volume-40e41d6a-e87b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 562.258018ms
Aug 27 15:37:48.395: INFO: Pod "downwardapi-volume-40e41d6a-e87b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.83096855s
Aug 27 15:37:50.402: INFO: Pod "downwardapi-volume-40e41d6a-e87b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.838111741s
Aug 27 15:37:52.409: INFO: Pod "downwardapi-volume-40e41d6a-e87b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.84461175s
Aug 27 15:37:55.027: INFO: Pod "downwardapi-volume-40e41d6a-e87b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.463000801s
Aug 27 15:37:57.032: INFO: Pod "downwardapi-volume-40e41d6a-e87b-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 11.468377036s
Aug 27 15:37:59.039: INFO: Pod "downwardapi-volume-40e41d6a-e87b-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.475150391s
STEP: Saw pod success
Aug 27 15:37:59.039: INFO: Pod "downwardapi-volume-40e41d6a-e87b-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:37:59.870: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-40e41d6a-e87b-11ea-894e-0242ac11000d container client-container: 
STEP: delete the pod
Aug 27 15:38:00.741: INFO: Waiting for pod downwardapi-volume-40e41d6a-e87b-11ea-894e-0242ac11000d to disappear
Aug 27 15:38:00.872: INFO: Pod downwardapi-volume-40e41d6a-e87b-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:38:00.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-mrplk" for this suite.
Aug 27 15:38:09.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:38:09.472: INFO: namespace: e2e-tests-downward-api-mrplk, resource: bindings, ignored listing per whitelist
Aug 27 15:38:09.533: INFO: namespace e2e-tests-downward-api-mrplk deletion completed in 8.651540927s

• [SLOW TEST:24.141 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:38:09.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 27 15:38:10.774: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4feb8059-e87b-11ea-894e-0242ac11000d" in namespace "e2e-tests-projected-f4n5t" to be "success or failure"
Aug 27 15:38:10.813: INFO: Pod "downwardapi-volume-4feb8059-e87b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 39.416456ms
Aug 27 15:38:13.048: INFO: Pod "downwardapi-volume-4feb8059-e87b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.273991344s
Aug 27 15:38:15.054: INFO: Pod "downwardapi-volume-4feb8059-e87b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.280208411s
Aug 27 15:38:17.478: INFO: Pod "downwardapi-volume-4feb8059-e87b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.704435068s
Aug 27 15:38:19.485: INFO: Pod "downwardapi-volume-4feb8059-e87b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.711712085s
Aug 27 15:38:21.491: INFO: Pod "downwardapi-volume-4feb8059-e87b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.716945054s
Aug 27 15:38:23.610: INFO: Pod "downwardapi-volume-4feb8059-e87b-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.836718152s
STEP: Saw pod success
Aug 27 15:38:23.611: INFO: Pod "downwardapi-volume-4feb8059-e87b-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:38:23.620: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-4feb8059-e87b-11ea-894e-0242ac11000d container client-container: 
STEP: delete the pod
Aug 27 15:38:23.796: INFO: Waiting for pod downwardapi-volume-4feb8059-e87b-11ea-894e-0242ac11000d to disappear
Aug 27 15:38:24.209: INFO: Pod downwardapi-volume-4feb8059-e87b-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:38:24.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-f4n5t" for this suite.
Aug 27 15:38:34.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:38:35.750: INFO: namespace: e2e-tests-projected-f4n5t, resource: bindings, ignored listing per whitelist
Aug 27 15:38:35.820: INFO: namespace e2e-tests-projected-f4n5t deletion completed in 11.599129068s

• [SLOW TEST:26.287 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:38:35.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-jvng
STEP: Creating a pod to test atomic-volume-subpath
Aug 27 15:38:37.373: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jvng" in namespace "e2e-tests-subpath-9pv9q" to be "success or failure"
Aug 27 15:38:37.505: INFO: Pod "pod-subpath-test-configmap-jvng": Phase="Pending", Reason="", readiness=false. Elapsed: 131.60393ms
Aug 27 15:38:39.513: INFO: Pod "pod-subpath-test-configmap-jvng": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13923232s
Aug 27 15:38:41.520: INFO: Pod "pod-subpath-test-configmap-jvng": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146428443s
Aug 27 15:38:43.598: INFO: Pod "pod-subpath-test-configmap-jvng": Phase="Pending", Reason="", readiness=false. Elapsed: 6.224790386s
Aug 27 15:38:45.996: INFO: Pod "pod-subpath-test-configmap-jvng": Phase="Pending", Reason="", readiness=false. Elapsed: 8.622271877s
Aug 27 15:38:48.007: INFO: Pod "pod-subpath-test-configmap-jvng": Phase="Pending", Reason="", readiness=false. Elapsed: 10.63316565s
Aug 27 15:38:50.012: INFO: Pod "pod-subpath-test-configmap-jvng": Phase="Pending", Reason="", readiness=false. Elapsed: 12.638743333s
Aug 27 15:38:52.053: INFO: Pod "pod-subpath-test-configmap-jvng": Phase="Pending", Reason="", readiness=false. Elapsed: 14.679411933s
Aug 27 15:38:54.059: INFO: Pod "pod-subpath-test-configmap-jvng": Phase="Running", Reason="", readiness=false. Elapsed: 16.685978034s
Aug 27 15:38:56.067: INFO: Pod "pod-subpath-test-configmap-jvng": Phase="Running", Reason="", readiness=false. Elapsed: 18.69334743s
Aug 27 15:38:58.074: INFO: Pod "pod-subpath-test-configmap-jvng": Phase="Running", Reason="", readiness=false. Elapsed: 20.700465391s
Aug 27 15:39:00.083: INFO: Pod "pod-subpath-test-configmap-jvng": Phase="Running", Reason="", readiness=false. Elapsed: 22.709687596s
Aug 27 15:39:02.209: INFO: Pod "pod-subpath-test-configmap-jvng": Phase="Running", Reason="", readiness=false. Elapsed: 24.835842993s
Aug 27 15:39:04.216: INFO: Pod "pod-subpath-test-configmap-jvng": Phase="Running", Reason="", readiness=false. Elapsed: 26.842238837s
Aug 27 15:39:06.223: INFO: Pod "pod-subpath-test-configmap-jvng": Phase="Running", Reason="", readiness=false. Elapsed: 28.84933682s
Aug 27 15:39:08.229: INFO: Pod "pod-subpath-test-configmap-jvng": Phase="Running", Reason="", readiness=false. Elapsed: 30.855929417s
Aug 27 15:39:10.833: INFO: Pod "pod-subpath-test-configmap-jvng": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.459216666s
STEP: Saw pod success
Aug 27 15:39:10.833: INFO: Pod "pod-subpath-test-configmap-jvng" satisfied condition "success or failure"
Aug 27 15:39:10.976: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-jvng container test-container-subpath-configmap-jvng: 
STEP: delete the pod
Aug 27 15:39:11.214: INFO: Waiting for pod pod-subpath-test-configmap-jvng to disappear
Aug 27 15:39:11.240: INFO: Pod pod-subpath-test-configmap-jvng no longer exists
STEP: Deleting pod pod-subpath-test-configmap-jvng
Aug 27 15:39:11.240: INFO: Deleting pod "pod-subpath-test-configmap-jvng" in namespace "e2e-tests-subpath-9pv9q"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:39:11.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-9pv9q" for this suite.
Aug 27 15:39:19.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:39:19.467: INFO: namespace: e2e-tests-subpath-9pv9q, resource: bindings, ignored listing per whitelist
Aug 27 15:39:19.474: INFO: namespace e2e-tests-subpath-9pv9q deletion completed in 8.218699318s

• [SLOW TEST:43.650 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:39:19.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 27 15:39:19.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-6nvmr'
Aug 27 15:39:43.357: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 27 15:39:43.357: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Aug 27 15:39:46.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-6nvmr'
Aug 27 15:39:48.389: INFO: stderr: ""
Aug 27 15:39:48.389: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:39:48.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6nvmr" for this suite.
Aug 27 15:39:58.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:39:58.684: INFO: namespace: e2e-tests-kubectl-6nvmr, resource: bindings, ignored listing per whitelist
Aug 27 15:39:58.745: INFO: namespace e2e-tests-kubectl-6nvmr deletion completed in 10.174626181s

• [SLOW TEST:39.271 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:39:58.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:40:20.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-ws7zv" for this suite.
Aug 27 15:40:49.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:40:49.532: INFO: namespace: e2e-tests-replication-controller-ws7zv, resource: bindings, ignored listing per whitelist
Aug 27 15:40:49.616: INFO: namespace e2e-tests-replication-controller-ws7zv deletion completed in 28.606204903s

• [SLOW TEST:50.869 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:40:49.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 27 15:40:50.877: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af0e54ea-e87b-11ea-894e-0242ac11000d" in namespace "e2e-tests-downward-api-5xn8p" to be "success or failure"
Aug 27 15:40:51.091: INFO: Pod "downwardapi-volume-af0e54ea-e87b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 213.479857ms
Aug 27 15:40:53.383: INFO: Pod "downwardapi-volume-af0e54ea-e87b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.506144844s
Aug 27 15:40:56.072: INFO: Pod "downwardapi-volume-af0e54ea-e87b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.194856521s
Aug 27 15:40:58.081: INFO: Pod "downwardapi-volume-af0e54ea-e87b-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 7.203701723s
Aug 27 15:41:00.105: INFO: Pod "downwardapi-volume-af0e54ea-e87b-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.228162755s
STEP: Saw pod success
Aug 27 15:41:00.106: INFO: Pod "downwardapi-volume-af0e54ea-e87b-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:41:00.135: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-af0e54ea-e87b-11ea-894e-0242ac11000d container client-container: 
STEP: delete the pod
Aug 27 15:41:00.578: INFO: Waiting for pod downwardapi-volume-af0e54ea-e87b-11ea-894e-0242ac11000d to disappear
Aug 27 15:41:00.863: INFO: Pod downwardapi-volume-af0e54ea-e87b-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:41:00.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-5xn8p" for this suite.
Aug 27 15:41:10.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:41:10.968: INFO: namespace: e2e-tests-downward-api-5xn8p, resource: bindings, ignored listing per whitelist
Aug 27 15:41:11.501: INFO: namespace e2e-tests-downward-api-5xn8p deletion completed in 10.625959266s

• [SLOW TEST:21.884 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:41:11.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-bc624937-e87b-11ea-894e-0242ac11000d
STEP: Creating a pod to test consume secrets
Aug 27 15:41:12.820: INFO: Waiting up to 5m0s for pod "pod-secrets-bc6a89c8-e87b-11ea-894e-0242ac11000d" in namespace "e2e-tests-secrets-6j44t" to be "success or failure"
Aug 27 15:41:13.072: INFO: Pod "pod-secrets-bc6a89c8-e87b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 252.179044ms
Aug 27 15:41:15.169: INFO: Pod "pod-secrets-bc6a89c8-e87b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.348427005s
Aug 27 15:41:17.174: INFO: Pod "pod-secrets-bc6a89c8-e87b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.353610762s
Aug 27 15:41:19.180: INFO: Pod "pod-secrets-bc6a89c8-e87b-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.359956304s
STEP: Saw pod success
Aug 27 15:41:19.181: INFO: Pod "pod-secrets-bc6a89c8-e87b-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:41:19.186: INFO: Trying to get logs from node hunter-worker pod pod-secrets-bc6a89c8-e87b-11ea-894e-0242ac11000d container secret-volume-test: 
STEP: delete the pod
Aug 27 15:41:19.221: INFO: Waiting for pod pod-secrets-bc6a89c8-e87b-11ea-894e-0242ac11000d to disappear
Aug 27 15:41:19.224: INFO: Pod pod-secrets-bc6a89c8-e87b-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:41:19.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-6j44t" for this suite.
Aug 27 15:41:25.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:41:25.278: INFO: namespace: e2e-tests-secrets-6j44t, resource: bindings, ignored listing per whitelist
Aug 27 15:41:25.345: INFO: namespace e2e-tests-secrets-6j44t deletion completed in 6.115785627s

• [SLOW TEST:13.843 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:41:25.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-c403c640-e87b-11ea-894e-0242ac11000d
STEP: Creating a pod to test consume secrets
Aug 27 15:41:25.552: INFO: Waiting up to 5m0s for pod "pod-secrets-c4044c80-e87b-11ea-894e-0242ac11000d" in namespace "e2e-tests-secrets-4xz5p" to be "success or failure"
Aug 27 15:41:25.622: INFO: Pod "pod-secrets-c4044c80-e87b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 70.565836ms
Aug 27 15:41:27.629: INFO: Pod "pod-secrets-c4044c80-e87b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076845145s
Aug 27 15:41:29.634: INFO: Pod "pod-secrets-c4044c80-e87b-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 4.082434485s
Aug 27 15:41:31.639: INFO: Pod "pod-secrets-c4044c80-e87b-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.086720952s
STEP: Saw pod success
Aug 27 15:41:31.639: INFO: Pod "pod-secrets-c4044c80-e87b-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:41:31.641: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-c4044c80-e87b-11ea-894e-0242ac11000d container secret-volume-test: 
STEP: delete the pod
Aug 27 15:41:31.697: INFO: Waiting for pod pod-secrets-c4044c80-e87b-11ea-894e-0242ac11000d to disappear
Aug 27 15:41:31.748: INFO: Pod pod-secrets-c4044c80-e87b-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:41:31.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-4xz5p" for this suite.
Aug 27 15:41:39.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:41:39.925: INFO: namespace: e2e-tests-secrets-4xz5p, resource: bindings, ignored listing per whitelist
Aug 27 15:41:39.952: INFO: namespace e2e-tests-secrets-4xz5p deletion completed in 8.131782323s

• [SLOW TEST:14.607 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:41:39.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:41:40.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-ns2b2" for this suite.
Aug 27 15:42:04.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:42:04.423: INFO: namespace: e2e-tests-kubelet-test-ns2b2, resource: bindings, ignored listing per whitelist
Aug 27 15:42:04.473: INFO: namespace e2e-tests-kubelet-test-ns2b2 deletion completed in 24.150134026s

• [SLOW TEST:24.521 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:42:04.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 27 15:42:05.414: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dbbba595-e87b-11ea-894e-0242ac11000d" in namespace "e2e-tests-downward-api-bzjpf" to be "success or failure"
Aug 27 15:42:05.461: INFO: Pod "downwardapi-volume-dbbba595-e87b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 47.551715ms
Aug 27 15:42:07.467: INFO: Pod "downwardapi-volume-dbbba595-e87b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052865656s
Aug 27 15:42:09.574: INFO: Pod "downwardapi-volume-dbbba595-e87b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160433242s
Aug 27 15:42:11.601: INFO: Pod "downwardapi-volume-dbbba595-e87b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.187547414s
Aug 27 15:42:13.871: INFO: Pod "downwardapi-volume-dbbba595-e87b-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 8.457706445s
Aug 27 15:42:15.879: INFO: Pod "downwardapi-volume-dbbba595-e87b-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.465641457s
STEP: Saw pod success
Aug 27 15:42:15.880: INFO: Pod "downwardapi-volume-dbbba595-e87b-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:42:15.972: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-dbbba595-e87b-11ea-894e-0242ac11000d container client-container: 
STEP: delete the pod
Aug 27 15:42:16.049: INFO: Waiting for pod downwardapi-volume-dbbba595-e87b-11ea-894e-0242ac11000d to disappear
Aug 27 15:42:16.214: INFO: Pod downwardapi-volume-dbbba595-e87b-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:42:16.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-bzjpf" for this suite.
Aug 27 15:42:24.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:42:24.580: INFO: namespace: e2e-tests-downward-api-bzjpf, resource: bindings, ignored listing per whitelist
Aug 27 15:42:24.605: INFO: namespace e2e-tests-downward-api-bzjpf deletion completed in 8.381177234s

• [SLOW TEST:20.130 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:42:24.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 27 15:42:24.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-wf69h'
Aug 27 15:42:26.091: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 27 15:42:26.092: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: rolling-update to same image controller
Aug 27 15:42:26.153: INFO: scanned /root for discovery docs: 
Aug 27 15:42:26.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-wf69h'
Aug 27 15:42:47.775: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 27 15:42:47.775: INFO: stdout: "Created e2e-test-nginx-rc-b8fac110c2adca76271c76531f73884d\nScaling up e2e-test-nginx-rc-b8fac110c2adca76271c76531f73884d from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-b8fac110c2adca76271c76531f73884d up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-b8fac110c2adca76271c76531f73884d to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Aug 27 15:42:47.775: INFO: stdout: "Created e2e-test-nginx-rc-b8fac110c2adca76271c76531f73884d\nScaling up e2e-test-nginx-rc-b8fac110c2adca76271c76531f73884d from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-b8fac110c2adca76271c76531f73884d up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-b8fac110c2adca76271c76531f73884d to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Aug 27 15:42:47.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-wf69h'
Aug 27 15:42:49.133: INFO: stderr: ""
Aug 27 15:42:49.133: INFO: stdout: "e2e-test-nginx-rc-7mjxg e2e-test-nginx-rc-b8fac110c2adca76271c76531f73884d-22txc "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Aug 27 15:42:54.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-wf69h'
Aug 27 15:42:55.413: INFO: stderr: ""
Aug 27 15:42:55.414: INFO: stdout: "e2e-test-nginx-rc-b8fac110c2adca76271c76531f73884d-22txc "
Aug 27 15:42:55.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-b8fac110c2adca76271c76531f73884d-22txc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wf69h'
Aug 27 15:42:56.797: INFO: stderr: ""
Aug 27 15:42:56.797: INFO: stdout: "true"
Aug 27 15:42:56.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-b8fac110c2adca76271c76531f73884d-22txc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wf69h'
Aug 27 15:42:58.126: INFO: stderr: ""
Aug 27 15:42:58.126: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Aug 27 15:42:58.126: INFO: e2e-test-nginx-rc-b8fac110c2adca76271c76531f73884d-22txc is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Aug 27 15:42:58.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-wf69h'
Aug 27 15:42:59.518: INFO: stderr: ""
Aug 27 15:42:59.518: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:42:59.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wf69h" for this suite.
Aug 27 15:43:05.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:43:05.794: INFO: namespace: e2e-tests-kubectl-wf69h, resource: bindings, ignored listing per whitelist
Aug 27 15:43:05.803: INFO: namespace e2e-tests-kubectl-wf69h deletion completed in 6.23211008s

• [SLOW TEST:41.197 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:43:05.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 27 15:43:06.207: INFO: Waiting up to 5m0s for pod "pod-fffe3696-e87b-11ea-894e-0242ac11000d" in namespace "e2e-tests-emptydir-hstrv" to be "success or failure"
Aug 27 15:43:06.382: INFO: Pod "pod-fffe3696-e87b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 174.64244ms
Aug 27 15:43:08.387: INFO: Pod "pod-fffe3696-e87b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180073866s
Aug 27 15:43:10.408: INFO: Pod "pod-fffe3696-e87b-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.201236198s
Aug 27 15:43:12.414: INFO: Pod "pod-fffe3696-e87b-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.207228268s
STEP: Saw pod success
Aug 27 15:43:12.414: INFO: Pod "pod-fffe3696-e87b-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:43:12.419: INFO: Trying to get logs from node hunter-worker2 pod pod-fffe3696-e87b-11ea-894e-0242ac11000d container test-container: 
STEP: delete the pod
Aug 27 15:43:12.456: INFO: Waiting for pod pod-fffe3696-e87b-11ea-894e-0242ac11000d to disappear
Aug 27 15:43:12.690: INFO: Pod pod-fffe3696-e87b-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:43:12.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hstrv" for this suite.
Aug 27 15:43:18.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:43:19.028: INFO: namespace: e2e-tests-emptydir-hstrv, resource: bindings, ignored listing per whitelist
Aug 27 15:43:19.038: INFO: namespace e2e-tests-emptydir-hstrv deletion completed in 6.337277401s

• [SLOW TEST:13.234 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:43:19.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 27 15:43:19.389: INFO: Waiting up to 5m0s for pod "downwardapi-volume-07de4e4f-e87c-11ea-894e-0242ac11000d" in namespace "e2e-tests-projected-xqmwb" to be "success or failure"
Aug 27 15:43:19.425: INFO: Pod "downwardapi-volume-07de4e4f-e87c-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 35.991635ms
Aug 27 15:43:21.442: INFO: Pod "downwardapi-volume-07de4e4f-e87c-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05297165s
Aug 27 15:43:23.667: INFO: Pod "downwardapi-volume-07de4e4f-e87c-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.27752405s
Aug 27 15:43:25.673: INFO: Pod "downwardapi-volume-07de4e4f-e87c-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.283732983s
Aug 27 15:43:28.177: INFO: Pod "downwardapi-volume-07de4e4f-e87c-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.787171211s
Aug 27 15:43:30.411: INFO: Pod "downwardapi-volume-07de4e4f-e87c-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.021694123s
STEP: Saw pod success
Aug 27 15:43:30.411: INFO: Pod "downwardapi-volume-07de4e4f-e87c-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:43:31.668: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-07de4e4f-e87c-11ea-894e-0242ac11000d container client-container: 
STEP: delete the pod
Aug 27 15:43:32.209: INFO: Waiting for pod downwardapi-volume-07de4e4f-e87c-11ea-894e-0242ac11000d to disappear
Aug 27 15:43:32.469: INFO: Pod downwardapi-volume-07de4e4f-e87c-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:43:32.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xqmwb" for this suite.
Aug 27 15:43:42.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:43:42.813: INFO: namespace: e2e-tests-projected-xqmwb, resource: bindings, ignored listing per whitelist
Aug 27 15:43:42.831: INFO: namespace e2e-tests-projected-xqmwb deletion completed in 10.302807553s

• [SLOW TEST:23.788 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:43:42.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 27 15:44:14.643: INFO: Container started at 2020-08-27 15:43:55 +0000 UTC, pod became ready at 2020-08-27 15:44:12 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:44:14.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-fp6z6" for this suite.
Aug 27 15:44:39.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:44:39.051: INFO: namespace: e2e-tests-container-probe-fp6z6, resource: bindings, ignored listing per whitelist
Aug 27 15:44:39.256: INFO: namespace e2e-tests-container-probe-fp6z6 deletion completed in 24.603218338s

• [SLOW TEST:56.423 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:44:39.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 27 15:44:39.741: INFO: Waiting up to 5m0s for pod "downwardapi-volume-37af8a4e-e87c-11ea-894e-0242ac11000d" in namespace "e2e-tests-projected-h75gb" to be "success or failure"
Aug 27 15:44:39.929: INFO: Pod "downwardapi-volume-37af8a4e-e87c-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 187.955693ms
Aug 27 15:44:42.297: INFO: Pod "downwardapi-volume-37af8a4e-e87c-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.55597126s
Aug 27 15:44:44.304: INFO: Pod "downwardapi-volume-37af8a4e-e87c-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.562781432s
Aug 27 15:44:46.396: INFO: Pod "downwardapi-volume-37af8a4e-e87c-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.655104672s
Aug 27 15:44:48.550: INFO: Pod "downwardapi-volume-37af8a4e-e87c-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.808680603s
STEP: Saw pod success
Aug 27 15:44:48.550: INFO: Pod "downwardapi-volume-37af8a4e-e87c-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:44:48.638: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-37af8a4e-e87c-11ea-894e-0242ac11000d container client-container: 
STEP: delete the pod
Aug 27 15:44:48.798: INFO: Waiting for pod downwardapi-volume-37af8a4e-e87c-11ea-894e-0242ac11000d to disappear
Aug 27 15:44:48.822: INFO: Pod downwardapi-volume-37af8a4e-e87c-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:44:48.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-h75gb" for this suite.
Aug 27 15:44:54.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:44:55.005: INFO: namespace: e2e-tests-projected-h75gb, resource: bindings, ignored listing per whitelist
Aug 27 15:44:55.028: INFO: namespace e2e-tests-projected-h75gb deletion completed in 6.196434669s

• [SLOW TEST:15.771 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:44:55.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-41069689-e87c-11ea-894e-0242ac11000d
STEP: Creating a pod to test consume secrets
Aug 27 15:44:55.334: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-410998da-e87c-11ea-894e-0242ac11000d" in namespace "e2e-tests-projected-drrjm" to be "success or failure"
Aug 27 15:44:55.453: INFO: Pod "pod-projected-secrets-410998da-e87c-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 118.124996ms
Aug 27 15:44:57.460: INFO: Pod "pod-projected-secrets-410998da-e87c-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124968019s
Aug 27 15:44:59.467: INFO: Pod "pod-projected-secrets-410998da-e87c-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.132006223s
STEP: Saw pod success
Aug 27 15:44:59.467: INFO: Pod "pod-projected-secrets-410998da-e87c-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:44:59.472: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-410998da-e87c-11ea-894e-0242ac11000d container projected-secret-volume-test: 
STEP: delete the pod
Aug 27 15:44:59.494: INFO: Waiting for pod pod-projected-secrets-410998da-e87c-11ea-894e-0242ac11000d to disappear
Aug 27 15:44:59.522: INFO: Pod pod-projected-secrets-410998da-e87c-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:44:59.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-drrjm" for this suite.
Aug 27 15:45:05.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:45:05.687: INFO: namespace: e2e-tests-projected-drrjm, resource: bindings, ignored listing per whitelist
Aug 27 15:45:05.776: INFO: namespace e2e-tests-projected-drrjm deletion completed in 6.210844492s

• [SLOW TEST:10.748 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:45:05.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 27 15:45:06.008: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Aug 27 15:45:11.015: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 27 15:45:11.017: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Aug 27 15:45:11.192: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-cckrn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cckrn/deployments/test-cleanup-deployment,UID:4a6a12ee-e87c-11ea-a485-0242ac120004,ResourceVersion:2662013,Generation:1,CreationTimestamp:2020-08-27 15:45:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Aug 27 15:45:11.242: INFO: New ReplicaSet "test-cleanup-deployment-6df768c57" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-6df768c57,GenerateName:,Namespace:e2e-tests-deployment-cckrn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cckrn/replicasets/test-cleanup-deployment-6df768c57,UID:4a6fbd73-e87c-11ea-a485-0242ac120004,ResourceVersion:2662015,Generation:1,CreationTimestamp:2020-08-27 15:45:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 6df768c57,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 4a6a12ee-e87c-11ea-a485-0242ac120004 0x40015cab80 0x40015cab81}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6df768c57,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 6df768c57,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 27 15:45:11.242: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Aug 27 15:45:11.243: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-cckrn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cckrn/replicasets/test-cleanup-controller,UID:476218a8-e87c-11ea-a485-0242ac120004,ResourceVersion:2662014,Generation:1,CreationTimestamp:2020-08-27 15:45:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 4a6a12ee-e87c-11ea-a485-0242ac120004 0x40015caa47 0x40015caa48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Aug 27 15:45:11.281: INFO: Pod "test-cleanup-controller-kjczq" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-kjczq,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-cckrn,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cckrn/pods/test-cleanup-controller-kjczq,UID:476cc685-e87c-11ea-a485-0242ac120004,ResourceVersion:2662009,Generation:0,CreationTimestamp:2020-08-27 15:45:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 476218a8-e87c-11ea-a485-0242ac120004 0x4002302217 0x4002302218}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zkhx4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zkhx4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zkhx4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4002302350} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002302370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 15:45:06 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 15:45:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 15:45:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 15:45:06 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.16,StartTime:2020-08-27 15:45:06 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-27 15:45:08 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://18e5f736673ab39f6a93bcd362743ce7fe2fb91a19094b0978f2e415e150c699}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 27 15:45:11.282: INFO: Pod "test-cleanup-deployment-6df768c57-j8zvl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-6df768c57-j8zvl,GenerateName:test-cleanup-deployment-6df768c57-,Namespace:e2e-tests-deployment-cckrn,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cckrn/pods/test-cleanup-deployment-6df768c57-j8zvl,UID:4a82b2be-e87c-11ea-a485-0242ac120004,ResourceVersion:2662019,Generation:0,CreationTimestamp:2020-08-27 15:45:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 6df768c57,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-6df768c57 4a6fbd73-e87c-11ea-a485-0242ac120004 0x4002302440 0x4002302441}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zkhx4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zkhx4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-zkhx4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4002302540} {node.kubernetes.io/unreachable Exists  NoExecute 0x4002302560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 15:45:11 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:45:11.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-cckrn" for this suite.
Aug 27 15:45:19.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:45:20.517: INFO: namespace: e2e-tests-deployment-cckrn, resource: bindings, ignored listing per whitelist
Aug 27 15:45:20.551: INFO: namespace e2e-tests-deployment-cckrn deletion completed in 9.180246113s

• [SLOW TEST:14.772 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
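The dump above shows the key detail of this spec: the Deployment is created with RevisionHistoryLimit:*0, so the controller is expected to garbage-collect the pre-existing ReplicaSet (test-cleanup-controller) rather than keep it as history. A minimal Go sketch of a Deployment with that setting, using the k8s.io/api types; the names and image are copied from the log, and the JSON printing is only for illustration, not part of the test framework:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "cleanup-pod"}
	// revisionHistoryLimit: 0 tells the Deployment controller to delete old
	// ReplicaSets as soon as they are scaled down, which is what the test asserts.
	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment", Labels: labels},
		Spec: appsv1.DeploymentSpec{
			Replicas:             int32Ptr(1),
			RevisionHistoryLimit: int32Ptr(0),
			Selector:             &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(out))
}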
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:45:20.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-509d5284-e87c-11ea-894e-0242ac11000d
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-509d5284-e87c-11ea-894e-0242ac11000d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:45:29.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-t7l2p" for this suite.
Aug 27 15:45:54.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:45:54.238: INFO: namespace: e2e-tests-configmap-t7l2p, resource: bindings, ignored listing per whitelist
Aug 27 15:45:54.344: INFO: namespace e2e-tests-configmap-t7l2p deletion completed in 24.408924143s

• [SLOW TEST:33.792 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
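This spec mounts a ConfigMap as a volume, updates the ConfigMap, and waits for the kubelet to project the new data into the mounted file. A hedged sketch of such a pod in Go; the ConfigMap name is taken from the log, while the key, mount path, image, and command are assumptions made for illustration:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-update-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "configmap-test-upd-509d5284-e87c-11ea-894e-0242ac11000d",
						},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "configmap-volume-test",
				Image: "busybox",
				// Keep printing the mounted key so that an update to the ConfigMap
				// becomes observable in the container output.
				Command:      []string{"/bin/sh", "-c", "while true; do cat /etc/configmap-volume/data-1; sleep 2; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}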
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:45:54.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Aug 27 15:45:59.223: INFO: Pod name wrapped-volume-race-671a3a14-e87c-11ea-894e-0242ac11000d: Found 0 pods out of 5
Aug 27 15:46:04.246: INFO: Pod name wrapped-volume-race-671a3a14-e87c-11ea-894e-0242ac11000d: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-671a3a14-e87c-11ea-894e-0242ac11000d in namespace e2e-tests-emptydir-wrapper-wkpn9, will wait for the garbage collector to delete the pods
Aug 27 15:48:36.947: INFO: Deleting ReplicationController wrapped-volume-race-671a3a14-e87c-11ea-894e-0242ac11000d took: 16.725798ms
Aug 27 15:48:37.148: INFO: Terminating ReplicationController wrapped-volume-race-671a3a14-e87c-11ea-894e-0242ac11000d pods took: 201.015827ms
STEP: Creating RC which spawns configmap-volume pods
Aug 27 15:49:31.115: INFO: Pod name wrapped-volume-race-e3f80dc4-e87c-11ea-894e-0242ac11000d: Found 0 pods out of 5
Aug 27 15:49:36.150: INFO: Pod name wrapped-volume-race-e3f80dc4-e87c-11ea-894e-0242ac11000d: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e3f80dc4-e87c-11ea-894e-0242ac11000d in namespace e2e-tests-emptydir-wrapper-wkpn9, will wait for the garbage collector to delete the pods
Aug 27 15:52:18.363: INFO: Deleting ReplicationController wrapped-volume-race-e3f80dc4-e87c-11ea-894e-0242ac11000d took: 16.305621ms
Aug 27 15:52:18.464: INFO: Terminating ReplicationController wrapped-volume-race-e3f80dc4-e87c-11ea-894e-0242ac11000d pods took: 100.669009ms
STEP: Creating RC which spawns configmap-volume pods
Aug 27 15:53:09.178: INFO: Pod name wrapped-volume-race-6758d18c-e87d-11ea-894e-0242ac11000d: Found 0 pods out of 5
Aug 27 15:53:14.196: INFO: Pod name wrapped-volume-race-6758d18c-e87d-11ea-894e-0242ac11000d: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-6758d18c-e87d-11ea-894e-0242ac11000d in namespace e2e-tests-emptydir-wrapper-wkpn9, will wait for the garbage collector to delete the pods
Aug 27 15:55:33.119: INFO: Deleting ReplicationController wrapped-volume-race-6758d18c-e87d-11ea-894e-0242ac11000d took: 7.350675ms
Aug 27 15:55:34.720: INFO: Terminating ReplicationController wrapped-volume-race-6758d18c-e87d-11ea-894e-0242ac11000d pods took: 1.600546385s
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:56:30.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-wkpn9" for this suite.
Aug 27 15:56:40.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:56:40.840: INFO: namespace: e2e-tests-emptydir-wrapper-wkpn9, resource: bindings, ignored listing per whitelist
Aug 27 15:56:40.898: INFO: namespace e2e-tests-emptydir-wrapper-wkpn9 deletion completed in 10.131645992s

• [SLOW TEST:646.554 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
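The race check repeatedly creates a ReplicationController whose pods each mount many ConfigMap volumes at once (50 ConfigMaps, 5 replicas per iteration), deletes it, and lets the garbage collector clean up before the next round. A small Go sketch of how such a pod template's volume and mount lists could be built; the names and paths are illustrative, not the ones generated by the suite:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Mounting many ConfigMap-backed (wrapped) volumes into one pod at the same
	// time is the condition that used to trigger the race this test guards against.
	var volumes []corev1.Volume
	var mounts []corev1.VolumeMount
	for i := 0; i < 50; i++ {
		name := fmt.Sprintf("racey-configmap-%d", i)
		volumes = append(volumes, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			},
		})
		mounts = append(mounts, corev1.VolumeMount{
			Name:      name,
			MountPath: fmt.Sprintf("/etc/config-%d", i),
		})
	}
	fmt.Printf("built %d volumes and %d mounts for the pod template\n", len(volumes), len(mounts))
}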
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:56:40.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 27 15:56:41.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Aug 27 15:56:42.379: INFO: stderr: ""
Aug 27 15:56:42.380: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-08-23T03:38:29Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/arm64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:50:51Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:56:42.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-djtfn" for this suite.
Aug 27 15:56:48.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:56:48.502: INFO: namespace: e2e-tests-kubectl-djtfn, resource: bindings, ignored listing per whitelist
Aug 27 15:56:48.505: INFO: namespace e2e-tests-kubectl-djtfn deletion completed in 6.117473703s

• [SLOW TEST:7.606 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
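The check simply shells out to kubectl version and asserts that both the client and the server version blocks are present in the output. A comparable standalone sketch in Go using os/exec; the kubectl path and kubeconfig are the ones shown in the log and would normally be parameterized:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("/usr/local/bin/kubectl",
		"--kubeconfig=/root/.kube/config", "version").CombinedOutput()
	if err != nil {
		fmt.Println("kubectl version failed:", err)
		return
	}
	s := string(out)
	// The conformance check amounts to verifying that both halves of the
	// version report were printed.
	fmt.Println("client info printed:", strings.Contains(s, "Client Version"))
	fmt.Println("server info printed:", strings.Contains(s, "Server Version"))
}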
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:56:48.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 27 15:57:00.843: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 15:57:00.847: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 15:57:02.848: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 15:57:02.852: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 15:57:04.848: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 15:57:04.980: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 15:57:06.848: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 15:57:06.853: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 15:57:08.848: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 15:57:08.852: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 15:57:10.848: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 15:57:10.853: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 15:57:12.848: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 15:57:13.692: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 15:57:14.848: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 15:57:14.852: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 15:57:16.848: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 15:57:16.919: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 15:57:18.848: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 15:57:18.852: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 15:57:20.848: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 15:57:20.853: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 15:57:22.848: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 15:57:22.884: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 15:57:24.848: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 15:57:24.853: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 15:57:26.848: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 15:57:26.938: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 15:57:28.848: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 15:57:29.457: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:57:29.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-bxq9c" for this suite.
Aug 27 15:58:03.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:58:03.960: INFO: namespace: e2e-tests-container-lifecycle-hook-bxq9c, resource: bindings, ignored listing per whitelist
Aug 27 15:58:03.975: INFO: namespace e2e-tests-container-lifecycle-hook-bxq9c deletion completed in 34.326736578s

• [SLOW TEST:75.469 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
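The pod under test declares a postStart exec hook, and the suite waits for the hook to fire before deleting the pod (the long "still exists" tail above is just the poll loop waiting for deletion). A sketch of a pod with such a hook in Go; the image and hook command are assumptions, and the corev1.Handler type matches the v1.13-era API used by this run (later releases renamed it LifecycleHandler):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-exec-hook",
				Image: "busybox",
				Lifecycle: &corev1.Lifecycle{
					// The hook command runs inside the container immediately
					// after it starts; illustrative command only.
					PostStart: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "echo poststart ran"},
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}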
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:58:03.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Aug 27 15:58:11.025: INFO: Successfully updated pod "labelsupdate174d3455-e87e-11ea-894e-0242ac11000d"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:58:13.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-xv8bk" for this suite.
Aug 27 15:58:35.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:58:35.152: INFO: namespace: e2e-tests-downward-api-xv8bk, resource: bindings, ignored listing per whitelist
Aug 27 15:58:35.202: INFO: namespace e2e-tests-downward-api-xv8bk deletion completed in 22.149179236s

• [SLOW TEST:31.227 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
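Here the pod exposes its own labels through a downward API volume; after the labels are patched (the "Successfully updated pod" line), the kubelet rewrites the projected file and the test observes the change. A hedged Go sketch of such a pod; the label values, image, command, and mount path are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-example",
			Labels: map[string]string{"key": "value1"},
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "labels",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "busybox",
				// Keep reading the projected labels file so a label patch on the
				// pod shows up in the container output.
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 2; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}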
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:58:35.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 27 15:58:35.421: INFO: Waiting up to 5m0s for pod "downwardapi-volume-29dd03a0-e87e-11ea-894e-0242ac11000d" in namespace "e2e-tests-projected-tc9df" to be "success or failure"
Aug 27 15:58:35.473: INFO: Pod "downwardapi-volume-29dd03a0-e87e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 52.121963ms
Aug 27 15:58:37.481: INFO: Pod "downwardapi-volume-29dd03a0-e87e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060128733s
Aug 27 15:58:39.555: INFO: Pod "downwardapi-volume-29dd03a0-e87e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133797007s
Aug 27 15:58:41.568: INFO: Pod "downwardapi-volume-29dd03a0-e87e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.146724039s
Aug 27 15:58:43.735: INFO: Pod "downwardapi-volume-29dd03a0-e87e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.314315088s
Aug 27 15:58:45.747: INFO: Pod "downwardapi-volume-29dd03a0-e87e-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 10.325651892s
Aug 27 15:58:47.753: INFO: Pod "downwardapi-volume-29dd03a0-e87e-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.331640508s
STEP: Saw pod success
Aug 27 15:58:47.753: INFO: Pod "downwardapi-volume-29dd03a0-e87e-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 15:58:47.766: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-29dd03a0-e87e-11ea-894e-0242ac11000d container client-container: 
STEP: delete the pod
Aug 27 15:58:47.898: INFO: Waiting for pod downwardapi-volume-29dd03a0-e87e-11ea-894e-0242ac11000d to disappear
Aug 27 15:58:47.944: INFO: Pod downwardapi-volume-29dd03a0-e87e-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:58:47.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tc9df" for this suite.
Aug 27 15:58:56.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 15:58:56.670: INFO: namespace: e2e-tests-projected-tc9df, resource: bindings, ignored listing per whitelist
Aug 27 15:58:57.502: INFO: namespace e2e-tests-projected-tc9df deletion completed in 9.551932244s

• [SLOW TEST:22.299 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
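This spec projects the container's memory limit into a file via a projected downward API volume and checks the file contents once the pod succeeds. A Go sketch of the relevant spec; the limit value, image, and paths are assumptions for illustration:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									// resourceFieldRef requires the container to
									// actually declare the referenced limit.
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}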
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 15:58:57.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-62gvh
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 27 15:58:59.366: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 27 15:59:40.265: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.20 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-62gvh PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 15:59:40.265: INFO: >>> kubeConfig: /root/.kube/config
I0827 15:59:40.677792       7 log.go:172] (0x4001cc84d0) (0x4001a1e3c0) Create stream
I0827 15:59:40.677906       7 log.go:172] (0x4001cc84d0) (0x4001a1e3c0) Stream added, broadcasting: 1
I0827 15:59:40.680368       7 log.go:172] (0x4001cc84d0) Reply frame received for 1
I0827 15:59:40.680499       7 log.go:172] (0x4001cc84d0) (0x4000a1a140) Create stream
I0827 15:59:40.680551       7 log.go:172] (0x4001cc84d0) (0x4000a1a140) Stream added, broadcasting: 3
I0827 15:59:40.681663       7 log.go:172] (0x4001cc84d0) Reply frame received for 3
I0827 15:59:40.681753       7 log.go:172] (0x4001cc84d0) (0x4001a1e500) Create stream
I0827 15:59:40.681801       7 log.go:172] (0x4001cc84d0) (0x4001a1e500) Stream added, broadcasting: 5
I0827 15:59:40.682681       7 log.go:172] (0x4001cc84d0) Reply frame received for 5
I0827 15:59:41.834061       7 log.go:172] (0x4001cc84d0) Data frame received for 3
I0827 15:59:41.834218       7 log.go:172] (0x4000a1a140) (3) Data frame handling
I0827 15:59:41.834356       7 log.go:172] (0x4000a1a140) (3) Data frame sent
I0827 15:59:41.834480       7 log.go:172] (0x4001cc84d0) Data frame received for 3
I0827 15:59:41.834619       7 log.go:172] (0x4001cc84d0) Data frame received for 5
I0827 15:59:41.834859       7 log.go:172] (0x4001a1e500) (5) Data frame handling
I0827 15:59:41.835051       7 log.go:172] (0x4000a1a140) (3) Data frame handling
I0827 15:59:41.836069       7 log.go:172] (0x4001cc84d0) Data frame received for 1
I0827 15:59:41.836175       7 log.go:172] (0x4001a1e3c0) (1) Data frame handling
I0827 15:59:41.836410       7 log.go:172] (0x4001a1e3c0) (1) Data frame sent
I0827 15:59:41.836532       7 log.go:172] (0x4001cc84d0) (0x4001a1e3c0) Stream removed, broadcasting: 1
I0827 15:59:41.836682       7 log.go:172] (0x4001cc84d0) Go away received
I0827 15:59:41.837153       7 log.go:172] (0x4001cc84d0) (0x4001a1e3c0) Stream removed, broadcasting: 1
I0827 15:59:41.837263       7 log.go:172] (0x4001cc84d0) (0x4000a1a140) Stream removed, broadcasting: 3
I0827 15:59:41.837393       7 log.go:172] (0x4001cc84d0) (0x4001a1e500) Stream removed, broadcasting: 5
Aug 27 15:59:41.837: INFO: Found all expected endpoints: [netserver-0]
Aug 27 15:59:42.143: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.138 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-62gvh PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 15:59:42.143: INFO: >>> kubeConfig: /root/.kube/config
I0827 15:59:42.501732       7 log.go:172] (0x40015e6370) (0x4000893b80) Create stream
I0827 15:59:42.501858       7 log.go:172] (0x40015e6370) (0x4000893b80) Stream added, broadcasting: 1
I0827 15:59:42.504603       7 log.go:172] (0x40015e6370) Reply frame received for 1
I0827 15:59:42.504700       7 log.go:172] (0x40015e6370) (0x4001b80320) Create stream
I0827 15:59:42.504829       7 log.go:172] (0x40015e6370) (0x4001b80320) Stream added, broadcasting: 3
I0827 15:59:42.505838       7 log.go:172] (0x40015e6370) Reply frame received for 3
I0827 15:59:42.505956       7 log.go:172] (0x40015e6370) (0x4000893c20) Create stream
I0827 15:59:42.506018       7 log.go:172] (0x40015e6370) (0x4000893c20) Stream added, broadcasting: 5
I0827 15:59:42.507328       7 log.go:172] (0x40015e6370) Reply frame received for 5
I0827 15:59:43.566405       7 log.go:172] (0x40015e6370) Data frame received for 3
I0827 15:59:43.566580       7 log.go:172] (0x4001b80320) (3) Data frame handling
I0827 15:59:43.566696       7 log.go:172] (0x4001b80320) (3) Data frame sent
I0827 15:59:43.566803       7 log.go:172] (0x40015e6370) Data frame received for 3
I0827 15:59:43.566952       7 log.go:172] (0x40015e6370) Data frame received for 5
I0827 15:59:43.567136       7 log.go:172] (0x4000893c20) (5) Data frame handling
I0827 15:59:43.567268       7 log.go:172] (0x4001b80320) (3) Data frame handling
I0827 15:59:43.567550       7 log.go:172] (0x40015e6370) Data frame received for 1
I0827 15:59:43.567725       7 log.go:172] (0x4000893b80) (1) Data frame handling
I0827 15:59:43.567844       7 log.go:172] (0x4000893b80) (1) Data frame sent
I0827 15:59:43.567974       7 log.go:172] (0x40015e6370) (0x4000893b80) Stream removed, broadcasting: 1
I0827 15:59:43.568152       7 log.go:172] (0x40015e6370) Go away received
I0827 15:59:43.568509       7 log.go:172] (0x40015e6370) (0x4000893b80) Stream removed, broadcasting: 1
I0827 15:59:43.568612       7 log.go:172] (0x40015e6370) (0x4001b80320) Stream removed, broadcasting: 3
I0827 15:59:43.568697       7 log.go:172] (0x40015e6370) (0x4000893c20) Stream removed, broadcasting: 5
Aug 27 15:59:43.568: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 15:59:43.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-62gvh" for this suite.
Aug 27 16:00:13.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 16:00:13.779: INFO: namespace: e2e-tests-pod-network-test-62gvh, resource: bindings, ignored listing per whitelist
Aug 27 16:00:13.850: INFO: namespace e2e-tests-pod-network-test-62gvh deletion completed in 30.274838017s

• [SLOW TEST:76.347 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
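The UDP check runs from the host-test-container-pod: it pipes the string hostName into nc aimed at each netserver pod's UDP port 8081 and expects the pod's hostname back (the ExecWithOptions lines above show the exact command). An equivalent one-off probe, sketched with kubectl exec via os/exec rather than the suite's exec helper; the namespace, pod name, and IP are copied from this run and are ephemeral:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same shell pipeline the test issues: send "hostName" over UDP and read
	// the reply; the grep drops blank lines from nc's output.
	probe := "echo 'hostName' | nc -w 1 -u 10.244.2.20 8081 | grep -v '^\\s*$'"
	cmd := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
		"exec", "-n", "e2e-tests-pod-network-test-62gvh",
		"host-test-container-pod", "-c", "hostexec", "--",
		"/bin/sh", "-c", probe)
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println("probe failed:", err)
	}
	// The spec passes once every expected endpoint (netserver-0, netserver-1)
	// has answered with its hostname.
	fmt.Printf("reply: %s", out)
}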
SSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 16:00:13.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Aug 27 16:00:14.329: INFO: Waiting up to 5m0s for pod "downward-api-64d2178e-e87e-11ea-894e-0242ac11000d" in namespace "e2e-tests-downward-api-fhd5t" to be "success or failure"
Aug 27 16:00:14.550: INFO: Pod "downward-api-64d2178e-e87e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 220.826101ms
Aug 27 16:00:16.555: INFO: Pod "downward-api-64d2178e-e87e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225529261s
Aug 27 16:00:18.986: INFO: Pod "downward-api-64d2178e-e87e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.6568559s
Aug 27 16:00:21.034: INFO: Pod "downward-api-64d2178e-e87e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.704531058s
Aug 27 16:00:23.144: INFO: Pod "downward-api-64d2178e-e87e-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 8.814993622s
Aug 27 16:00:25.149: INFO: Pod "downward-api-64d2178e-e87e-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.819496494s
STEP: Saw pod success
Aug 27 16:00:25.149: INFO: Pod "downward-api-64d2178e-e87e-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 16:00:25.411: INFO: Trying to get logs from node hunter-worker pod downward-api-64d2178e-e87e-11ea-894e-0242ac11000d container dapi-container: 
STEP: delete the pod
Aug 27 16:00:25.786: INFO: Waiting for pod downward-api-64d2178e-e87e-11ea-894e-0242ac11000d to disappear
Aug 27 16:00:26.188: INFO: Pod downward-api-64d2178e-e87e-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 16:00:26.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fhd5t" for this suite.
Aug 27 16:00:34.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 16:00:34.514: INFO: namespace: e2e-tests-downward-api-fhd5t, resource: bindings, ignored listing per whitelist
Aug 27 16:00:34.517: INFO: namespace e2e-tests-downward-api-fhd5t deletion completed in 8.323364057s

• [SLOW TEST:20.667 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
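This spec injects the pod's name, namespace, and IP into environment variables via downward API fieldRefs and has the container print them. A hedged Go sketch of such a pod spec; the env var names, image, and command are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fieldEnv builds an env var sourced from a downward API fieldRef.
func fieldEnv(name, path string) corev1.EnvVar {
	return corev1.EnvVar{
		Name:      name,
		ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{FieldPath: path}},
	}
}

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env | grep -E 'POD_NAME|POD_NAMESPACE|POD_IP'"},
				Env: []corev1.EnvVar{
					fieldEnv("POD_NAME", "metadata.name"),
					fieldEnv("POD_NAMESPACE", "metadata.namespace"),
					fieldEnv("POD_IP", "status.podIP"),
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}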
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 16:00:34.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-70eebed1-e87e-11ea-894e-0242ac11000d
STEP: Creating a pod to test consume secrets
Aug 27 16:00:34.662: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-70f0593a-e87e-11ea-894e-0242ac11000d" in namespace "e2e-tests-projected-wthcs" to be "success or failure"
Aug 27 16:00:34.666: INFO: Pod "pod-projected-secrets-70f0593a-e87e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.507998ms
Aug 27 16:00:36.890: INFO: Pod "pod-projected-secrets-70f0593a-e87e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.228223372s
Aug 27 16:00:39.052: INFO: Pod "pod-projected-secrets-70f0593a-e87e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.390414323s
Aug 27 16:00:41.057: INFO: Pod "pod-projected-secrets-70f0593a-e87e-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.395108392s
STEP: Saw pod success
Aug 27 16:00:41.057: INFO: Pod "pod-projected-secrets-70f0593a-e87e-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 16:00:41.074: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-70f0593a-e87e-11ea-894e-0242ac11000d container projected-secret-volume-test: 
STEP: delete the pod
Aug 27 16:00:41.134: INFO: Waiting for pod pod-projected-secrets-70f0593a-e87e-11ea-894e-0242ac11000d to disappear
Aug 27 16:00:41.178: INFO: Pod pod-projected-secrets-70f0593a-e87e-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 16:00:41.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wthcs" for this suite.
Aug 27 16:00:47.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 16:00:47.300: INFO: namespace: e2e-tests-projected-wthcs, resource: bindings, ignored listing per whitelist
Aug 27 16:00:47.336: INFO: namespace e2e-tests-projected-wthcs deletion completed in 6.148810117s

• [SLOW TEST:12.818 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
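The pod mounts a Secret through a projected volume and the test checks the file contents via the container logs. A Go sketch of the relevant spec; the Secret name is taken from the log, while the key, mount path, and image are assumptions:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-secret-test-70eebed1-e87e-11ea-894e-0242ac11000d",
								},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/projected-secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}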
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 16:00:47.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 27 16:00:55.534: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 27 16:00:55.555: INFO: Pod pod-with-poststart-http-hook still exists
Aug 27 16:00:57.555: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 27 16:00:57.675: INFO: Pod pod-with-poststart-http-hook still exists
Aug 27 16:00:59.555: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 27 16:00:59.561: INFO: Pod pod-with-poststart-http-hook still exists
Aug 27 16:01:01.555: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 27 16:01:01.563: INFO: Pod pod-with-poststart-http-hook still exists
Aug 27 16:01:03.555: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 27 16:01:03.562: INFO: Pod pod-with-poststart-http-hook still exists
Aug 27 16:01:05.555: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 27 16:01:05.805: INFO: Pod pod-with-poststart-http-hook still exists
Aug 27 16:01:07.555: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 27 16:01:07.563: INFO: Pod pod-with-poststart-http-hook still exists
Aug 27 16:01:09.555: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 27 16:01:09.561: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 16:01:09.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-jpjtp" for this suite.
Aug 27 16:01:33.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 16:01:33.704: INFO: namespace: e2e-tests-container-lifecycle-hook-jpjtp, resource: bindings, ignored listing per whitelist
Aug 27 16:01:33.733: INFO: namespace e2e-tests-container-lifecycle-hook-jpjtp deletion completed in 24.162924094s

• [SLOW TEST:46.396 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
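Same lifecycle-hook flow as the exec variant earlier, but here the postStart handler is an HTTP GET against the handler pod created in BeforeEach. A sketch in Go; the target host, port, and path are illustrative, and corev1.Handler again reflects the v1.13-era API (LifecycleHandler in newer releases):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "busybox",
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart",
							Host: "10.244.2.21", // handler pod IP; per-run value
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}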
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 16:01:33.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0827 16:01:37.304200       7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 27 16:01:37.304: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 16:01:37.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-rdjj2" for this suite.
Aug 27 16:01:44.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 16:01:44.308: INFO: namespace: e2e-tests-gc-rdjj2, resource: bindings, ignored listing per whitelist
Aug 27 16:01:44.360: INFO: namespace e2e-tests-gc-rdjj2 deletion completed in 7.046266802s

• [SLOW TEST:10.626 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
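The garbage-collector spec deletes a Deployment without orphaning, then waits until the dependent ReplicaSet and Pods are gone (the "expected 0 rs, got 1 rs" lines are intermediate polls, not failures). A hedged client-go sketch of a non-orphaning delete; the Delete signature shown is the one in recent client-go releases, and the namespace and Deployment name are illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// A non-orphaning propagation policy asks the garbage collector to delete
	// the owned ReplicaSet and Pods along with the Deployment.
	policy := metav1.DeletePropagationBackground
	err = client.AppsV1().Deployments("default").Delete(
		context.TODO(), "example-deployment",
		metav1.DeleteOptions{PropagationPolicy: &policy})
	fmt.Println("delete err:", err)
}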
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 16:01:44.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-wmkt
STEP: Creating a pod to test atomic-volume-subpath
Aug 27 16:01:44.936: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-wmkt" in namespace "e2e-tests-subpath-x99q2" to be "success or failure"
Aug 27 16:01:45.162: INFO: Pod "pod-subpath-test-projected-wmkt": Phase="Pending", Reason="", readiness=false. Elapsed: 224.920612ms
Aug 27 16:01:47.167: INFO: Pod "pod-subpath-test-projected-wmkt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230659438s
Aug 27 16:01:49.227: INFO: Pod "pod-subpath-test-projected-wmkt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.290499207s
Aug 27 16:01:51.282: INFO: Pod "pod-subpath-test-projected-wmkt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.344938051s
Aug 27 16:01:53.673: INFO: Pod "pod-subpath-test-projected-wmkt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.736570864s
Aug 27 16:01:55.681: INFO: Pod "pod-subpath-test-projected-wmkt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.743869108s
Aug 27 16:01:57.688: INFO: Pod "pod-subpath-test-projected-wmkt": Phase="Pending", Reason="", readiness=false. Elapsed: 12.751274246s
Aug 27 16:02:00.090: INFO: Pod "pod-subpath-test-projected-wmkt": Phase="Pending", Reason="", readiness=false. Elapsed: 15.153323868s
Aug 27 16:02:02.319: INFO: Pod "pod-subpath-test-projected-wmkt": Phase="Running", Reason="", readiness=false. Elapsed: 17.382794193s
Aug 27 16:02:04.325: INFO: Pod "pod-subpath-test-projected-wmkt": Phase="Running", Reason="", readiness=false. Elapsed: 19.388822872s
Aug 27 16:02:06.331: INFO: Pod "pod-subpath-test-projected-wmkt": Phase="Running", Reason="", readiness=false. Elapsed: 21.394645437s
Aug 27 16:02:08.337: INFO: Pod "pod-subpath-test-projected-wmkt": Phase="Running", Reason="", readiness=false. Elapsed: 23.400737256s
Aug 27 16:02:10.561: INFO: Pod "pod-subpath-test-projected-wmkt": Phase="Running", Reason="", readiness=false. Elapsed: 25.624453671s
Aug 27 16:02:12.590: INFO: Pod "pod-subpath-test-projected-wmkt": Phase="Running", Reason="", readiness=false. Elapsed: 27.653688365s
Aug 27 16:02:14.596: INFO: Pod "pod-subpath-test-projected-wmkt": Phase="Running", Reason="", readiness=false. Elapsed: 29.658906975s
Aug 27 16:02:16.602: INFO: Pod "pod-subpath-test-projected-wmkt": Phase="Running", Reason="", readiness=false. Elapsed: 31.66499686s
Aug 27 16:02:18.670: INFO: Pod "pod-subpath-test-projected-wmkt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.733633603s
STEP: Saw pod success
Aug 27 16:02:18.671: INFO: Pod "pod-subpath-test-projected-wmkt" satisfied condition "success or failure"
Aug 27 16:02:19.004: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-projected-wmkt container test-container-subpath-projected-wmkt: 
STEP: delete the pod
Aug 27 16:02:19.180: INFO: Waiting for pod pod-subpath-test-projected-wmkt to disappear
Aug 27 16:02:19.568: INFO: Pod pod-subpath-test-projected-wmkt no longer exists
STEP: Deleting pod pod-subpath-test-projected-wmkt
Aug 27 16:02:19.569: INFO: Deleting pod "pod-subpath-test-projected-wmkt" in namespace "e2e-tests-subpath-x99q2"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 16:02:19.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-x99q2" for this suite.
Aug 27 16:02:31.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 16:02:31.692: INFO: namespace: e2e-tests-subpath-x99q2, resource: bindings, ignored listing per whitelist
Aug 27 16:02:31.775: INFO: namespace e2e-tests-subpath-x99q2 deletion completed in 12.161581011s

• [SLOW TEST:47.415 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
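For context, the "atomic-volume-subpath" pod above mounts a single key of a projected ConfigMap at a subPath inside the container. A minimal manifest in that spirit might look like the following sketch (the names, image, and key are illustrative placeholders, not taken from the run above):

apiVersion: v1
kind: Pod
metadata:
  name: projected-subpath-example
spec:
  restartPolicy: Never
  volumes:
  - name: config-volume
    projected:
      sources:
      - configMap:
          name: example-config          # hypothetical ConfigMap containing a key named "data-1"
  containers:
  - name: test-container
    image: busybox
    command: ["cat", "/etc/config/data-1"]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config/data-1
      subPath: data-1                   # mount only this key, not the whole projected volume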
S
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 16:02:31.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 27 16:02:32.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 16:02:40.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-xrw6q" for this suite.
Aug 27 16:03:23.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 16:03:23.168: INFO: namespace: e2e-tests-pods-xrw6q, resource: bindings, ignored listing per whitelist
Aug 27 16:03:23.218: INFO: namespace e2e-tests-pods-xrw6q deletion completed in 42.545018669s

• [SLOW TEST:51.443 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
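The spec above streams container output through the API server's pod log endpoint over a websocket connection rather than plain HTTP. A pod whose output could be fetched that way is as simple as the following sketch (name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: logs-over-websocket-example
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo hello from the container"]

Once the pod is running, the same output is served at /api/v1/namespaces/<namespace>/pods/<name>/log; the conformance test exercises that endpoint over a websocket.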
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 16:03:23.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 27 16:03:25.463: INFO: Waiting up to 5m0s for pod "pod-d6b4c9e3-e87e-11ea-894e-0242ac11000d" in namespace "e2e-tests-emptydir-4kmdt" to be "success or failure"
Aug 27 16:03:25.801: INFO: Pod "pod-d6b4c9e3-e87e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 337.747044ms
Aug 27 16:03:27.805: INFO: Pod "pod-d6b4c9e3-e87e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.341584164s
Aug 27 16:03:29.810: INFO: Pod "pod-d6b4c9e3-e87e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.346301967s
Aug 27 16:03:32.172: INFO: Pod "pod-d6b4c9e3-e87e-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 6.708658047s
Aug 27 16:03:34.178: INFO: Pod "pod-d6b4c9e3-e87e-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.714742596s
STEP: Saw pod success
Aug 27 16:03:34.178: INFO: Pod "pod-d6b4c9e3-e87e-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 16:03:34.182: INFO: Trying to get logs from node hunter-worker2 pod pod-d6b4c9e3-e87e-11ea-894e-0242ac11000d container test-container: 
STEP: delete the pod
Aug 27 16:03:34.213: INFO: Waiting for pod pod-d6b4c9e3-e87e-11ea-894e-0242ac11000d to disappear
Aug 27 16:03:34.303: INFO: Pod pod-d6b4c9e3-e87e-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 16:03:34.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-4kmdt" for this suite.
Aug 27 16:03:40.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 16:03:40.416: INFO: namespace: e2e-tests-emptydir-4kmdt, resource: bindings, ignored listing per whitelist
Aug 27 16:03:40.443: INFO: namespace e2e-tests-emptydir-4kmdt deletion completed in 6.134056897s

• [SLOW TEST:17.224 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
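The (non-root,0644,tmpfs) case above writes a file with mode 0644 into a memory-backed emptyDir while running as a non-root user. A pod in that spirit (the UID, names, image, and shell command are all illustrative, not the test's actual container):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # non-root UID (illustrative)
  volumes:
  - name: cache
    emptyDir:
      medium: Memory           # tmpfs-backed emptyDir
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo content > /test/file && chmod 0644 /test/file && ls -l /test/file"]
    volumeMounts:
    - name: cache
      mountPath: /test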
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 16:03:40.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Aug 27 16:03:40.576: INFO: Waiting up to 5m0s for pod "downward-api-dfbfd734-e87e-11ea-894e-0242ac11000d" in namespace "e2e-tests-downward-api-p9twd" to be "success or failure"
Aug 27 16:03:40.580: INFO: Pod "downward-api-dfbfd734-e87e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.892847ms
Aug 27 16:03:42.587: INFO: Pod "downward-api-dfbfd734-e87e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010916376s
Aug 27 16:03:44.723: INFO: Pod "downward-api-dfbfd734-e87e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.147124127s
Aug 27 16:03:46.806: INFO: Pod "downward-api-dfbfd734-e87e-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 6.230153022s
Aug 27 16:03:48.846: INFO: Pod "downward-api-dfbfd734-e87e-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.269785458s
STEP: Saw pod success
Aug 27 16:03:48.846: INFO: Pod "downward-api-dfbfd734-e87e-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 16:03:48.937: INFO: Trying to get logs from node hunter-worker2 pod downward-api-dfbfd734-e87e-11ea-894e-0242ac11000d container dapi-container: 
STEP: delete the pod
Aug 27 16:03:49.871: INFO: Waiting for pod downward-api-dfbfd734-e87e-11ea-894e-0242ac11000d to disappear
Aug 27 16:03:50.108: INFO: Pod downward-api-dfbfd734-e87e-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 16:03:50.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-p9twd" for this suite.
Aug 27 16:03:59.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 16:03:59.773: INFO: namespace: e2e-tests-downward-api-p9twd, resource: bindings, ignored listing per whitelist
Aug 27 16:03:59.836: INFO: namespace e2e-tests-downward-api-p9twd deletion completed in 9.319448188s

• [SLOW TEST:19.391 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
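The downward API exposes pod and node metadata to containers; in this spec the node's IP is injected as an environment variable through a fieldRef. A minimal sketch (name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-hostip-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP    # injects the node's IP into the container environment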
SSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 16:03:59.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 27 16:04:00.151: INFO: Creating deployment "test-recreate-deployment"
Aug 27 16:04:00.196: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Aug 27 16:04:00.325: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Aug 27 16:04:02.365: INFO: Waiting deployment "test-recreate-deployment" to complete
Aug 27 16:04:02.368: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734141040, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734141040, loc:(*time.Location)(0x70e3ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734141040, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734141040, loc:(*time.Location)(0x70e3ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 16:04:04.375: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734141040, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734141040, loc:(*time.Location)(0x70e3ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734141040, loc:(*time.Location)(0x70e3ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734141040, loc:(*time.Location)(0x70e3ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 16:04:06.374: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Aug 27 16:04:06.384: INFO: Updating deployment test-recreate-deployment
Aug 27 16:04:06.384: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Aug 27 16:04:07.240: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-gj5fr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gj5fr/deployments/test-recreate-deployment,UID:eb6d47e6-e87e-11ea-a485-0242ac120004,ResourceVersion:2665032,Generation:2,CreationTimestamp:2020-08-27 16:04:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-08-27 16:04:06 +0000 UTC 2020-08-27 16:04:06 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-08-27 16:04:07 +0000 UTC 2020-08-27 16:04:00 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Aug 27 16:04:07.248: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-gj5fr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gj5fr/replicasets/test-recreate-deployment-589c4bfd,UID:ef368ad1-e87e-11ea-a485-0242ac120004,ResourceVersion:2665030,Generation:1,CreationTimestamp:2020-08-27 16:04:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment eb6d47e6-e87e-11ea-a485-0242ac120004 0x4001a40a2f 0x4001a40a40}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 27 16:04:07.248: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Aug 27 16:04:07.249: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-gj5fr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gj5fr/replicasets/test-recreate-deployment-5bf7f65dc,UID:eb81947d-e87e-11ea-a485-0242ac120004,ResourceVersion:2665020,Generation:2,CreationTimestamp:2020-08-27 16:04:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment eb6d47e6-e87e-11ea-a485-0242ac120004 0x4001a40b60 0x4001a40b61}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 27 16:04:07.255: INFO: Pod "test-recreate-deployment-589c4bfd-rs997" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-rs997,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-gj5fr,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gj5fr/pods/test-recreate-deployment-589c4bfd-rs997,UID:ef3995be-e87e-11ea-a485-0242ac120004,ResourceVersion:2665031,Generation:0,CreationTimestamp:2020-08-27 16:04:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd ef368ad1-e87e-11ea-a485-0242ac120004 0x40022df5df 0x40022df5f0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-df9xx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-df9xx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-df9xx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x40022df6d0} {node.kubernetes.io/unreachable Exists  NoExecute 0x40022df6f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 16:04:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 16:04:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 16:04:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 16:04:06 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.8,PodIP:,StartTime:2020-08-27 16:04:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 16:04:07.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-gj5fr" for this suite.
Aug 27 16:04:13.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 16:04:13.472: INFO: namespace: e2e-tests-deployment-gj5fr, resource: bindings, ignored listing per whitelist
Aug 27 16:04:13.512: INFO: namespace e2e-tests-deployment-gj5fr deletion completed in 6.231427833s

• [SLOW TEST:13.675 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
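The log above shows the Recreate strategy at work: the old ReplicaSet (redis) is scaled to zero before the new one (nginx) brings up its pod, so old and new pods never run together. A minimal Deployment using that strategy might look like this sketch (names and image are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-example
spec:
  replicas: 1
  strategy:
    type: Recreate             # delete all old pods before creating new ones
  selector:
    matchLabels:
      app: recreate-example
  template:
    metadata:
      labels:
        app: recreate-example
    spec:
      containers:
      - name: app
        image: nginx:1.14-alpine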
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 16:04:13.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 27 16:04:13.827: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f38eae04-e87e-11ea-894e-0242ac11000d" in namespace "e2e-tests-downward-api-pfjlg" to be "success or failure"
Aug 27 16:04:13.940: INFO: Pod "downwardapi-volume-f38eae04-e87e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 112.233379ms
Aug 27 16:04:15.947: INFO: Pod "downwardapi-volume-f38eae04-e87e-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119187587s
Aug 27 16:04:17.973: INFO: Pod "downwardapi-volume-f38eae04-e87e-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 4.145591552s
Aug 27 16:04:19.979: INFO: Pod "downwardapi-volume-f38eae04-e87e-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.151542718s
STEP: Saw pod success
Aug 27 16:04:19.979: INFO: Pod "downwardapi-volume-f38eae04-e87e-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 16:04:19.984: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-f38eae04-e87e-11ea-894e-0242ac11000d container client-container: 
STEP: delete the pod
Aug 27 16:04:20.015: INFO: Waiting for pod downwardapi-volume-f38eae04-e87e-11ea-894e-0242ac11000d to disappear
Aug 27 16:04:20.054: INFO: Pod downwardapi-volume-f38eae04-e87e-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 16:04:20.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-pfjlg" for this suite.
Aug 27 16:04:26.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 16:04:26.151: INFO: namespace: e2e-tests-downward-api-pfjlg, resource: bindings, ignored listing per whitelist
Aug 27 16:04:26.194: INFO: namespace e2e-tests-downward-api-pfjlg deletion completed in 6.131400741s

• [SLOW TEST:12.681 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
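Here the downward API is consumed as a volume rather than as environment variables: the container's CPU limit is written to a file through a resourceFieldRef. A sketch of such a pod (names, image, and the 500m limit are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-cpu-limit-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m          # the file then contains the limit as an integer number of millicores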
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 16:04:26.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 16:04:30.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-jhpmh" for this suite.
Aug 27 16:04:36.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 16:04:36.540: INFO: namespace: e2e-tests-emptydir-wrapper-jhpmh, resource: bindings, ignored listing per whitelist
Aug 27 16:04:36.623: INFO: namespace e2e-tests-emptydir-wrapper-jhpmh deletion completed in 6.158215531s

• [SLOW TEST:10.428 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
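The "wrapper" volumes in this suite are the emptyDir-style mounts that back Secret and ConfigMap volumes; the spec checks that two such volumes in one pod do not collide. A pod mounting both side by side (all names are illustrative, referencing a hypothetical Secret and ConfigMap):

apiVersion: v1
kind: Pod
metadata:
  name: wrapper-volumes-example
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: example-secret       # hypothetical Secret
  - name: configmap-volume
    configMap:
      name: example-config             # hypothetical ConfigMap
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
    - name: configmap-volume
      mountPath: /etc/configmap-volume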
SSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 16:04:36.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-vftxn
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-vftxn
STEP: Deleting pre-stop pod
Aug 27 16:04:49.857: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 16:04:49.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-vftxn" for this suite.
Aug 27 16:05:28.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 16:05:28.944: INFO: namespace: e2e-tests-prestop-vftxn, resource: bindings, ignored listing per whitelist
Aug 27 16:05:30.027: INFO: namespace e2e-tests-prestop-vftxn deletion completed in 40.141192053s

• [SLOW TEST:53.403 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
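The "Received": {"prestop": 1} entry above is the server pod recording one hit from the tester pod's preStop hook, which the kubelet runs before stopping the container when the pod is deleted. A pod with an HTTP preStop hook looks roughly like the following sketch (name, image, port, and target host are illustrative; the e2e tester points its hook at the server pod it created):

apiVersion: v1
kind: Pod
metadata:
  name: prestop-example
spec:
  containers:
  - name: tester
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        httpGet:
          path: /prestop       # called by the kubelet before the container is stopped
          port: 8080
          host: 10.0.0.10      # hypothetical server pod IP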
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 16:05:30.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Aug 27 16:05:32.184: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Aug 27 16:05:32.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hjpzp'
Aug 27 16:05:44.331: INFO: stderr: ""
Aug 27 16:05:44.332: INFO: stdout: "service/redis-slave created\n"
Aug 27 16:05:44.333: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Aug 27 16:05:44.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hjpzp'
Aug 27 16:05:47.402: INFO: stderr: ""
Aug 27 16:05:47.402: INFO: stdout: "service/redis-master created\n"
Aug 27 16:05:47.403: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Aug 27 16:05:47.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hjpzp'
Aug 27 16:05:50.273: INFO: stderr: ""
Aug 27 16:05:50.273: INFO: stdout: "service/frontend created\n"
Aug 27 16:05:50.275: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Aug 27 16:05:50.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hjpzp'
Aug 27 16:05:52.152: INFO: stderr: ""
Aug 27 16:05:52.153: INFO: stdout: "deployment.extensions/frontend created\n"
Aug 27 16:05:52.154: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 27 16:05:52.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hjpzp'
Aug 27 16:05:56.754: INFO: stderr: ""
Aug 27 16:05:56.754: INFO: stdout: "deployment.extensions/redis-master created\n"
Aug 27 16:05:56.756: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Aug 27 16:05:56.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hjpzp'
Aug 27 16:06:00.285: INFO: stderr: ""
Aug 27 16:06:00.285: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Aug 27 16:06:00.285: INFO: Waiting for all frontend pods to be Running.
Aug 27 16:06:10.338: INFO: Waiting for frontend to serve content.
Aug 27 16:06:11.569: INFO: Failed to get response from guestbook. err: , response: 
Fatal error: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Connection refused [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:155 Stack trace: #0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(128): Predis\Connection\AbstractConnection->onConnectionError('Connection refu...', 111) #1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(178): Predis\Connection\StreamConnection->createStreamSocket(Object(Predis\Connection\Parameters), 'tcp://redis-sla...', 4) #2 /usr/local/lib/php/Predis/Connection/StreamConnection.php(100): Predis\Connection\StreamConnection->tcpStreamInitializer(Object(Predis\Connection\Parameters)) #3 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(81): Predis\Connection\StreamConnection->createResource() #4 /usr/local/lib/php/Predis/Connection/StreamConnection.php(258): Predis\Connection\AbstractConnection->connect() #5 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(180): Predis\Connection\Stream in /usr/local/lib/php/Predis/Connection/AbstractConnection.php on line 155
Aug 27 16:06:16.595: INFO: Trying to add a new entry to the guestbook.
Aug 27 16:06:16.606: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Aug 27 16:06:16.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-hjpzp'
Aug 27 16:06:17.947: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 16:06:17.947: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Aug 27 16:06:17.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-hjpzp'
Aug 27 16:06:19.884: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 16:06:19.885: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 27 16:06:19.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-hjpzp'
Aug 27 16:06:22.147: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 16:06:22.147: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 27 16:06:22.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-hjpzp'
Aug 27 16:06:23.475: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 16:06:23.475: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 27 16:06:23.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-hjpzp'
Aug 27 16:06:25.182: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 16:06:25.182: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 27 16:06:25.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-hjpzp'
Aug 27 16:06:27.130: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 16:06:27.131: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 16:06:27.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hjpzp" for this suite.
Aug 27 16:07:09.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 16:07:09.458: INFO: namespace: e2e-tests-kubectl-hjpzp, resource: bindings, ignored listing per whitelist
Aug 27 16:07:09.540: INFO: namespace e2e-tests-kubectl-hjpzp deletion completed in 42.397600116s

• [SLOW TEST:99.511 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 16:07:09.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Aug 27 16:07:10.507: INFO: Waiting up to 5m0s for pod "client-containers-5cbde79e-e87f-11ea-894e-0242ac11000d" in namespace "e2e-tests-containers-jwdlt" to be "success or failure"
Aug 27 16:07:10.523: INFO: Pod "client-containers-5cbde79e-e87f-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.121523ms
Aug 27 16:07:12.529: INFO: Pod "client-containers-5cbde79e-e87f-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021485756s
Aug 27 16:07:14.535: INFO: Pod "client-containers-5cbde79e-e87f-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027131883s
Aug 27 16:07:16.552: INFO: Pod "client-containers-5cbde79e-e87f-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04473065s
Aug 27 16:07:18.564: INFO: Pod "client-containers-5cbde79e-e87f-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.056667365s
STEP: Saw pod success
Aug 27 16:07:18.564: INFO: Pod "client-containers-5cbde79e-e87f-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 16:07:18.613: INFO: Trying to get logs from node hunter-worker pod client-containers-5cbde79e-e87f-11ea-894e-0242ac11000d container test-container: 
STEP: delete the pod
Aug 27 16:07:18.800: INFO: Waiting for pod client-containers-5cbde79e-e87f-11ea-894e-0242ac11000d to disappear
Aug 27 16:07:19.379: INFO: Pod client-containers-5cbde79e-e87f-11ea-894e-0242ac11000d no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 16:07:19.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-jwdlt" for this suite.
Aug 27 16:07:28.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 16:07:28.156: INFO: namespace: e2e-tests-containers-jwdlt, resource: bindings, ignored listing per whitelist
Aug 27 16:07:28.205: INFO: namespace e2e-tests-containers-jwdlt deletion completed in 8.333891522s

• [SLOW TEST:18.664 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 16:07:28.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-68610b8c-e87f-11ea-894e-0242ac11000d
STEP: Creating a pod to test consume configMaps
Aug 27 16:07:30.254: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6861d607-e87f-11ea-894e-0242ac11000d" in namespace "e2e-tests-projected-gcjrt" to be "success or failure"
Aug 27 16:07:30.372: INFO: Pod "pod-projected-configmaps-6861d607-e87f-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 118.391801ms
Aug 27 16:07:32.378: INFO: Pod "pod-projected-configmaps-6861d607-e87f-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124424911s
Aug 27 16:07:34.386: INFO: Pod "pod-projected-configmaps-6861d607-e87f-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131527604s
Aug 27 16:07:36.393: INFO: Pod "pod-projected-configmaps-6861d607-e87f-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138824764s
Aug 27 16:07:38.401: INFO: Pod "pod-projected-configmaps-6861d607-e87f-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.146913494s
STEP: Saw pod success
Aug 27 16:07:38.401: INFO: Pod "pod-projected-configmaps-6861d607-e87f-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 16:07:38.406: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-6861d607-e87f-11ea-894e-0242ac11000d container projected-configmap-volume-test: 
STEP: delete the pod
Aug 27 16:07:38.511: INFO: Waiting for pod pod-projected-configmaps-6861d607-e87f-11ea-894e-0242ac11000d to disappear
Aug 27 16:07:38.529: INFO: Pod pod-projected-configmaps-6861d607-e87f-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 16:07:38.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gcjrt" for this suite.
Aug 27 16:07:44.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 16:07:44.663: INFO: namespace: e2e-tests-projected-gcjrt, resource: bindings, ignored listing per whitelist
Aug 27 16:07:44.736: INFO: namespace e2e-tests-projected-gcjrt deletion completed in 6.197602174s

• [SLOW TEST:16.529 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 16:07:44.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-8h27t/configmap-test-716c2419-e87f-11ea-894e-0242ac11000d
STEP: Creating a pod to test consume configMaps
Aug 27 16:07:44.991: INFO: Waiting up to 5m0s for pod "pod-configmaps-716eb0e1-e87f-11ea-894e-0242ac11000d" in namespace "e2e-tests-configmap-8h27t" to be "success or failure"
Aug 27 16:07:45.067: INFO: Pod "pod-configmaps-716eb0e1-e87f-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 75.205806ms
Aug 27 16:07:47.227: INFO: Pod "pod-configmaps-716eb0e1-e87f-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235103072s
Aug 27 16:07:49.233: INFO: Pod "pod-configmaps-716eb0e1-e87f-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 4.241106867s
Aug 27 16:07:51.240: INFO: Pod "pod-configmaps-716eb0e1-e87f-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.248090463s
STEP: Saw pod success
Aug 27 16:07:51.240: INFO: Pod "pod-configmaps-716eb0e1-e87f-11ea-894e-0242ac11000d" satisfied condition "success or failure"
Aug 27 16:07:51.245: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-716eb0e1-e87f-11ea-894e-0242ac11000d container env-test: 
STEP: delete the pod
Aug 27 16:07:51.280: INFO: Waiting for pod pod-configmaps-716eb0e1-e87f-11ea-894e-0242ac11000d to disappear
Aug 27 16:07:51.342: INFO: Pod pod-configmaps-716eb0e1-e87f-11ea-894e-0242ac11000d no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 16:07:51.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-8h27t" for this suite.
Aug 27 16:07:57.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 16:07:57.636: INFO: namespace: e2e-tests-configmap-8h27t, resource: bindings, ignored listing per whitelist
Aug 27 16:07:57.697: INFO: namespace e2e-tests-configmap-8h27t deletion completed in 6.345273778s

• [SLOW TEST:12.958 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 16:07:57.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Aug 27 16:07:57.976: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-pj5kh" to be "success or failure"
Aug 27 16:07:58.009: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 32.080036ms
Aug 27 16:08:00.391: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.414432502s
Aug 27 16:08:02.865: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.888424628s
Aug 27 16:08:04.873: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.89647269s
Aug 27 16:08:06.880: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.903674826s
STEP: Saw pod success
Aug 27 16:08:06.881: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Aug 27 16:08:06.886: INFO: Trying to get logs from node hunter-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Aug 27 16:08:06.950: INFO: Waiting for pod pod-host-path-test to disappear
Aug 27 16:08:07.103: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 16:08:07.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-pj5kh" for this suite.
Aug 27 16:08:13.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 27 16:08:13.919: INFO: namespace: e2e-tests-hostpath-pj5kh, resource: bindings, ignored listing per whitelist
Aug 27 16:08:14.011: INFO: namespace e2e-tests-hostpath-pj5kh deletion completed in 6.877840942s

• [SLOW TEST:16.310 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 27 16:08:14.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-8316ee5a-e87f-11ea-894e-0242ac11000d
STEP: Creating secret with name s-test-opt-upd-8316f5fa-e87f-11ea-894e-0242ac11000d
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-8316ee5a-e87f-11ea-894e-0242ac11000d
STEP: Updating secret s-test-opt-upd-8316f5fa-e87f-11ea-894e-0242ac11000d
STEP: Creating secret with name s-test-opt-create-8316fc39-e87f-11ea-894e-0242ac11000d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 27 16:08:25.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-tthtv" for this suite.
Aug 27 16:08:49.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 16:08:49.604: INFO: namespace: e2e-tests-secrets-tthtv, resource: bindings, ignored listing per whitelist Aug 27 16:08:49.653: INFO: namespace e2e-tests-secrets-tthtv deletion completed in 24.321017121s • [SLOW TEST:35.639 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 16:08:49.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 27 16:08:50.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Aug 27 16:08:51.265: INFO: stderr: "" Aug 27 16:08:51.265: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-08-23T03:38:29Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/arm64\"}\n" Aug 27 16:08:51.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wxrzx' Aug 27 16:08:53.004: INFO: stderr: "" Aug 27 16:08:53.004: INFO: stdout: "replicationcontroller/redis-master created\n" Aug 27 16:08:53.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wxrzx' Aug 27 16:08:55.013: INFO: stderr: "" Aug 27 16:08:55.013: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Aug 27 16:08:56.440: INFO: Selector matched 1 pods for map[app:redis] Aug 27 16:08:56.440: INFO: Found 0 / 1 Aug 27 16:08:57.020: INFO: Selector matched 1 pods for map[app:redis] Aug 27 16:08:57.020: INFO: Found 0 / 1 Aug 27 16:08:58.020: INFO: Selector matched 1 pods for map[app:redis] Aug 27 16:08:58.020: INFO: Found 0 / 1 Aug 27 16:08:59.057: INFO: Selector matched 1 pods for map[app:redis] Aug 27 16:08:59.057: INFO: Found 0 / 1 Aug 27 16:09:00.018: INFO: Selector matched 1 pods for map[app:redis] Aug 27 16:09:00.018: INFO: Found 1 / 1 Aug 27 16:09:00.018: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 27 16:09:00.021: INFO: Selector matched 1 pods for map[app:redis] Aug 27 16:09:00.021: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
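The "Waiting for Redis master to start" loop above repeatedly lists pods matching the app=redis selector and counts how many are running ("Found 0 / 1" ... "Found 1 / 1") before it moves on to the kubectl describe calls. A rough client-go equivalent of that polling, using the context-free call signatures of the v1.13 era shown in this log, might look like the sketch below; the kubeconfig path and namespace come from the log, everything else is illustrative and is not the framework's WaitFor helper.

```go
// Hypothetical sketch: poll for pods matching a label selector until all of
// them report phase Running, with a 5 minute deadline as in the log.
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		// v1.13-era signature; newer client-go adds a context argument.
		pods, err := cs.CoreV1().Pods("e2e-tests-kubectl-wxrzx").List(metav1.ListOptions{LabelSelector: "app=redis"})
		if err != nil {
			panic(err)
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		fmt.Printf("Found %d / %d\n", running, len(pods.Items))
		if len(pods.Items) > 0 && running == len(pods.Items) {
			return
		}
		time.Sleep(time.Second)
	}
	panic("timed out waiting for redis master")
}
```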
Aug 27 16:09:00.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-p68tx --namespace=e2e-tests-kubectl-wxrzx' Aug 27 16:09:01.429: INFO: stderr: "" Aug 27 16:09:01.430: INFO: stdout: "Name: redis-master-p68tx\nNamespace: e2e-tests-kubectl-wxrzx\nPriority: 0\nPriorityClassName: \nNode: hunter-worker2/172.18.0.8\nStart Time: Thu, 27 Aug 2020 16:08:53 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.36\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://37a3be2c77c98297f65141334767ed09d1d72c575f6187c2ced3bb5a4d3d35c3\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 27 Aug 2020 16:08:58 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-xdxk7 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-xdxk7:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-xdxk7\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 8s default-scheduler Successfully assigned e2e-tests-kubectl-wxrzx/redis-master-p68tx to hunter-worker2\n Normal Pulled 7s kubelet, hunter-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 3s kubelet, hunter-worker2 Created container\n Normal Started 3s kubelet, hunter-worker2 Started container\n" Aug 27 16:09:01.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-wxrzx' Aug 27 16:09:02.882: INFO: stderr: "" Aug 27 16:09:02.883: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-wxrzx\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 9s replication-controller Created pod: redis-master-p68tx\n" Aug 27 16:09:02.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-wxrzx' Aug 27 16:09:04.269: INFO: stderr: "" Aug 27 16:09:04.270: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-wxrzx\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.100.62.209\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.36:6379\nSession Affinity: None\nEvents: \n" Aug 27 16:09:04.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' Aug 27 16:09:05.822: INFO: stderr: "" Aug 27 16:09:05.823: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n 
beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 15 Aug 2020 09:32:36 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 27 Aug 2020 16:09:00 +0000 Sat, 15 Aug 2020 09:32:36 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 27 Aug 2020 16:09:00 +0000 Sat, 15 Aug 2020 09:32:36 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 27 Aug 2020 16:09:00 +0000 Sat, 15 Aug 2020 09:32:36 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 27 Aug 2020 16:09:00 +0000 Sat, 15 Aug 2020 09:33:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.4\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nSystem Info:\n Machine ID: 403efd4ae68744eab619e7055020cc3f\n System UUID: dafd70bf-eb1f-4422-b415-7379320414ca\n Boot ID: 11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version: 4.15.0-109-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-54ff9cd656-7rfjf 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 12d\n kube-system coredns-54ff9cd656-n4q2v 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 12d\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12d\n kube-system kindnet-kjrwt 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 12d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 12d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 12d\n kube-system kube-proxy-5tp66 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12d\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 12d\n local-path-storage local-path-provisioner-674595c7-srvmc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Aug 27 16:09:05.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-wxrzx' Aug 27 16:09:07.219: INFO: stderr: "" Aug 27 16:09:07.219: INFO: stdout: "Name: e2e-tests-kubectl-wxrzx\nLabels: e2e-framework=kubectl\n e2e-run=db339393-e86b-11ea-894e-0242ac11000d\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 
16:09:07.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-wxrzx" for this suite. Aug 27 16:09:31.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 16:09:32.269: INFO: namespace: e2e-tests-kubectl-wxrzx, resource: bindings, ignored listing per whitelist Aug 27 16:09:32.427: INFO: namespace e2e-tests-kubectl-wxrzx deletion completed in 25.199003783s • [SLOW TEST:42.772 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 16:09:32.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 16:09:39.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-2v9th" for this suite. Aug 27 16:09:47.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 16:09:48.140: INFO: namespace: e2e-tests-namespaces-2v9th, resource: bindings, ignored listing per whitelist Aug 27 16:09:48.224: INFO: namespace e2e-tests-namespaces-2v9th deletion completed in 8.605057005s STEP: Destroying namespace "e2e-tests-nsdeletetest-bsszt" for this suite. Aug 27 16:09:48.226: INFO: Namespace e2e-tests-nsdeletetest-bsszt was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-v8f7d" for this suite. 
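The Namespaces [Serial] spec above creates a service in a test namespace, deletes the namespace, waits for it to disappear, recreates it, and verifies that no service survived. The following is a compressed, hypothetical client-go sketch of those steps using v1.13-era call signatures; the namespace and service names are placeholders, and the real test drives this through the e2e framework rather than a standalone program.

```go
// Hypothetical sketch: create a service inside a throwaway namespace, delete the
// namespace, wait for it to disappear, recreate it, and check that no services
// carried over.
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "nsdeletetest-example" // placeholder name
	createNS := func() {
		if _, err := cs.CoreV1().Namespaces().Create(&corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: ns}}); err != nil {
			panic(err)
		}
	}
	createNS()
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "test-service"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "example"},
			Ports:    []corev1.ServicePort{{Port: 80}},
		},
	}
	if _, err := cs.CoreV1().Services(ns).Create(svc); err != nil {
		panic(err)
	}
	if err := cs.CoreV1().Namespaces().Delete(ns, nil); err != nil {
		panic(err)
	}
	for { // namespace deletion is asynchronous; wait until it is really gone
		if _, err := cs.CoreV1().Namespaces().Get(ns, metav1.GetOptions{}); apierrors.IsNotFound(err) {
			break
		}
		time.Sleep(2 * time.Second)
	}
	createNS()
	svcs, err := cs.CoreV1().Services(ns).List(metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("services in recreated namespace: %d\n", len(svcs.Items))
}
```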
Aug 27 16:09:54.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 16:09:54.415: INFO: namespace: e2e-tests-nsdeletetest-v8f7d, resource: bindings, ignored listing per whitelist Aug 27 16:09:54.457: INFO: namespace e2e-tests-nsdeletetest-v8f7d deletion completed in 6.230388555s • [SLOW TEST:22.028 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 16:09:54.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod Aug 27 16:10:03.721: INFO: Pod pod-hostip-bec6d775-e87f-11ea-894e-0242ac11000d has hostIP: 172.18.0.8 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 16:10:03.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-2v2qj" for this suite. 
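The "should get a host IP" spec above simply creates a pod and, once it is scheduled, reads the node address from the pod's status.hostIP field (172.18.0.8 in the log). A minimal, hypothetical sketch of that read with client-go (pod name and namespace are placeholders; v1.13-era Get signature):

```go
// Hypothetical sketch: read the node IP the kubelet reports for a pod via
// the pod's status.hostIP field. Not the e2e test code.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// v1.13-era signature; newer client-go adds a context argument.
	pod, err := cs.CoreV1().Pods("default").Get("pod-hostip-example", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("Pod %s has hostIP: %s\n", pod.Name, pod.Status.HostIP) // empty until the pod is scheduled
}
```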
Aug 27 16:10:32.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 16:10:32.185: INFO: namespace: e2e-tests-pods-2v2qj, resource: bindings, ignored listing per whitelist Aug 27 16:10:32.304: INFO: namespace e2e-tests-pods-2v2qj deletion completed in 28.575589664s • [SLOW TEST:37.845 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 16:10:32.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-d647dbfd-e87f-11ea-894e-0242ac11000d STEP: Creating a pod to test consume secrets Aug 27 16:10:34.226: INFO: Waiting up to 5m0s for pod "pod-secrets-d64e7039-e87f-11ea-894e-0242ac11000d" in namespace "e2e-tests-secrets-n2kh9" to be "success or failure" Aug 27 16:10:34.405: INFO: Pod "pod-secrets-d64e7039-e87f-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 178.873723ms Aug 27 16:10:36.423: INFO: Pod "pod-secrets-d64e7039-e87f-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197364709s Aug 27 16:10:38.430: INFO: Pod "pod-secrets-d64e7039-e87f-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204489854s Aug 27 16:10:40.437: INFO: Pod "pod-secrets-d64e7039-e87f-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.211341393s STEP: Saw pod success Aug 27 16:10:40.437: INFO: Pod "pod-secrets-d64e7039-e87f-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 16:10:40.442: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-d64e7039-e87f-11ea-894e-0242ac11000d container secret-volume-test: STEP: delete the pod Aug 27 16:10:40.494: INFO: Waiting for pod pod-secrets-d64e7039-e87f-11ea-894e-0242ac11000d to disappear Aug 27 16:10:40.613: INFO: Pod pod-secrets-d64e7039-e87f-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 16:10:40.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-n2kh9" for this suite. 
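The "mappings and Item Mode set" secret spec above projects a single secret key to a renamed path with an explicit per-file mode, and the test container then verifies both the content and the mode. The hypothetical snippet below shows the corresponding Items/Mode fields on a secret volume source; the key, path, and mode values are illustrative rather than the test's exact fixtures.

```go
// Hypothetical sketch: map one secret key to a custom path with an explicit
// file mode (0400 here), instead of projecting all keys with the default mode.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400)
	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "secret-test-map", // placeholder; the real test appends a UID
				Items: []corev1.KeyToPath{{
					Key:  "data-1",          // assumed key name
					Path: "new-path-data-1", // file name seen inside the mount
					Mode: &mode,             // per-item mode the test then checks
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```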
Aug 27 16:10:46.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 16:10:46.728: INFO: namespace: e2e-tests-secrets-n2kh9, resource: bindings, ignored listing per whitelist Aug 27 16:10:46.800: INFO: namespace e2e-tests-secrets-n2kh9 deletion completed in 6.176306408s • [SLOW TEST:14.494 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 16:10:46.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Aug 27 16:10:53.780: INFO: Successfully updated pod "annotationupdatedde7a204-e87f-11ea-894e-0242ac11000d" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 16:10:58.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-h2zzt" for this suite. 
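The projected downwardAPI spec above ("should update annotations on modification") exposes the pod's own annotations as a file through a projected volume and then updates the annotations, expecting the kubelet to rewrite the file ("Successfully updated pod ... annotationupdate..."). A hypothetical sketch of that projection with the k8s.io/api types (volume name and file path are illustrative):

```go
// Hypothetical sketch: a projected volume that surfaces metadata.annotations as
// a file; when the pod's annotations change, the kubelet refreshes the file.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "annotations", // file name inside the mount
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```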
Aug 27 16:11:22.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 16:11:22.190: INFO: namespace: e2e-tests-projected-h2zzt, resource: bindings, ignored listing per whitelist Aug 27 16:11:22.330: INFO: namespace e2e-tests-projected-h2zzt deletion completed in 24.216621584s • [SLOW TEST:35.529 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 16:11:22.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 27 16:11:37.182: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 27 16:11:37.200: INFO: Pod pod-with-prestop-http-hook still exists Aug 27 16:11:39.201: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 27 16:11:39.441: INFO: Pod pod-with-prestop-http-hook still exists Aug 27 16:11:41.201: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 27 16:11:41.206: INFO: Pod pod-with-prestop-http-hook still exists Aug 27 16:11:43.201: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 27 16:11:43.207: INFO: Pod pod-with-prestop-http-hook still exists Aug 27 16:11:45.201: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 27 16:11:45.209: INFO: Pod pod-with-prestop-http-hook still exists Aug 27 16:11:47.201: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 27 16:11:47.207: INFO: Pod pod-with-prestop-http-hook still exists Aug 27 16:11:49.201: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 27 16:11:49.208: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 16:11:49.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-mgn88" for this suite. 
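The lifecycle-hook spec above first starts a helper pod that records HTTPGet requests, then creates a pod whose container declares a preStop HTTP hook pointing at that helper; deleting the pod must trigger the hook before the container exits, which the "check prestop hook" step verifies. Below is a hypothetical container definition with such a hook. The host, port, and path are illustrative, and corev1.Handler is the v1.13-era type name used by this suite (much newer k8s.io/api releases call it LifecycleHandler).

```go
// Hypothetical sketch: a container with a preStop HTTP hook. On pod deletion the
// kubelet performs the GET before sending the container its termination signal.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	c := corev1.Container{
		Name:  "pod-with-prestop-http-hook",
		Image: "busybox", // placeholder image
		Lifecycle: &corev1.Lifecycle{
			PreStop: &corev1.Handler{
				HTTPGet: &corev1.HTTPGetAction{
					Host: "10.244.2.1",         // assumed IP of the helper pod
					Port: intstr.FromInt(8080), // assumed helper port
					Path: "/echo?msg=prestop",  // assumed endpoint the helper records
				},
			},
		},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
```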
Aug 27 16:12:11.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 16:12:11.681: INFO: namespace: e2e-tests-container-lifecycle-hook-mgn88, resource: bindings, ignored listing per whitelist Aug 27 16:12:11.745: INFO: namespace e2e-tests-container-lifecycle-hook-mgn88 deletion completed in 22.488983552s • [SLOW TEST:49.414 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 16:12:11.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 27 16:12:11.918: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 16:12:18.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-wm6mz" for this suite. 
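The "remote command execution over websockets" spec above submits a pod and then drives the pod's exec subresource directly over a websocket connection. The log does not show that dialing code; as a rough stand-in, the sketch below exercises the same exec subresource with client-go's SPDY-based remotecommand helper. This is explicitly not the conformance test's websocket transport, and the pod name, container name, and command are placeholders.

```go
// Hypothetical sketch: exec a command in a running pod via the exec subresource.
// Uses client-go's SPDY executor; the conformance test itself dials the same
// endpoint over a websocket, which is not reproduced here.
package main

import (
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace("default").Name("pod-exec-example"). // placeholder pod
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "main",                              // placeholder container name
			Command:   []string{"cat", "/etc/resolv.conf"}, // placeholder command
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)
	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: os.Stdout, Stderr: os.Stderr}); err != nil {
		panic(err)
	}
}
```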
Aug 27 16:13:05.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 16:13:06.261: INFO: namespace: e2e-tests-pods-wm6mz, resource: bindings, ignored listing per whitelist Aug 27 16:13:06.272: INFO: namespace e2e-tests-pods-wm6mz deletion completed in 47.771718824s • [SLOW TEST:54.525 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 16:13:06.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-31a6bb9f-e880-11ea-894e-0242ac11000d STEP: Creating a pod to test consume configMaps Aug 27 16:13:07.542: INFO: Waiting up to 5m0s for pod "pod-configmaps-31b0ca26-e880-11ea-894e-0242ac11000d" in namespace "e2e-tests-configmap-4r7fr" to be "success or failure" Aug 27 16:13:08.219: INFO: Pod "pod-configmaps-31b0ca26-e880-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 676.835378ms Aug 27 16:13:10.225: INFO: Pod "pod-configmaps-31b0ca26-e880-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.682206544s Aug 27 16:13:12.231: INFO: Pod "pod-configmaps-31b0ca26-e880-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.689062416s Aug 27 16:13:14.237: INFO: Pod "pod-configmaps-31b0ca26-e880-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.695032378s Aug 27 16:13:16.694: INFO: Pod "pod-configmaps-31b0ca26-e880-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.151495959s STEP: Saw pod success Aug 27 16:13:16.694: INFO: Pod "pod-configmaps-31b0ca26-e880-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 16:13:17.082: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-31b0ca26-e880-11ea-894e-0242ac11000d container configmap-volume-test: STEP: delete the pod Aug 27 16:13:17.598: INFO: Waiting for pod pod-configmaps-31b0ca26-e880-11ea-894e-0242ac11000d to disappear Aug 27 16:13:17.609: INFO: Pod pod-configmaps-31b0ca26-e880-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 16:13:17.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-4r7fr" for this suite. 
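The ConfigMap volume spec above ("defaultMode set") mounts a ConfigMap with an explicit DefaultMode and has the test container report the mode of the projected file, which the suite then checks in the pod logs. A hypothetical volume definition with DefaultMode (ConfigMap name and the 0400 mode are illustrative; the usual default is 0644):

```go
// Hypothetical sketch: a ConfigMap volume whose files are projected with mode
// 0400 instead of the default.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400)
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"}, // placeholder
				DefaultMode:          &mode,
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```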
Aug 27 16:13:23.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 16:13:23.715: INFO: namespace: e2e-tests-configmap-4r7fr, resource: bindings, ignored listing per whitelist Aug 27 16:13:23.799: INFO: namespace e2e-tests-configmap-4r7fr deletion completed in 6.182714785s • [SLOW TEST:17.527 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 16:13:23.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-3b8a88db-e880-11ea-894e-0242ac11000d STEP: Creating a pod to test consume secrets Aug 27 16:13:24.104: INFO: Waiting up to 5m0s for pod "pod-secrets-3b8db238-e880-11ea-894e-0242ac11000d" in namespace "e2e-tests-secrets-x7865" to be "success or failure" Aug 27 16:13:24.159: INFO: Pod "pod-secrets-3b8db238-e880-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 54.966098ms Aug 27 16:13:26.502: INFO: Pod "pod-secrets-3b8db238-e880-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.397839325s Aug 27 16:13:28.507: INFO: Pod "pod-secrets-3b8db238-e880-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.403488823s Aug 27 16:13:30.513: INFO: Pod "pod-secrets-3b8db238-e880-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.408863305s STEP: Saw pod success Aug 27 16:13:30.513: INFO: Pod "pod-secrets-3b8db238-e880-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 16:13:30.517: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-3b8db238-e880-11ea-894e-0242ac11000d container secret-env-test: STEP: delete the pod Aug 27 16:13:30.538: INFO: Waiting for pod pod-secrets-3b8db238-e880-11ea-894e-0242ac11000d to disappear Aug 27 16:13:30.593: INFO: Pod pod-secrets-3b8db238-e880-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 16:13:30.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-x7865" for this suite. 
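The Secrets env-var spec above injects a single secret key into the test container's environment and then checks the container's output. A hypothetical env entry using a SecretKeyRef (secret name, key, and variable name are illustrative):

```go
// Hypothetical sketch: expose one key of a secret as an environment variable.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	env := corev1.EnvVar{
		Name: "SECRET_DATA",
		ValueFrom: &corev1.EnvVarSource{
			SecretKeyRef: &corev1.SecretKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"}, // placeholder
				Key:                  "data-1",                                         // assumed key
			},
		},
	}
	out, _ := json.MarshalIndent(env, "", "  ")
	fmt.Println(string(out))
}
```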
Aug 27 16:13:36.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 16:13:36.705: INFO: namespace: e2e-tests-secrets-x7865, resource: bindings, ignored listing per whitelist Aug 27 16:13:36.736: INFO: namespace e2e-tests-secrets-x7865 deletion completed in 6.135246793s • [SLOW TEST:12.935 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 27 16:13:36.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 27 16:13:37.019: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4335d21f-e880-11ea-894e-0242ac11000d" in namespace "e2e-tests-projected-pt585" to be "success or failure" Aug 27 16:13:37.043: INFO: Pod "downwardapi-volume-4335d21f-e880-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 23.779868ms Aug 27 16:13:39.447: INFO: Pod "downwardapi-volume-4335d21f-e880-11ea-894e-0242ac11000d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.42775752s Aug 27 16:13:41.454: INFO: Pod "downwardapi-volume-4335d21f-e880-11ea-894e-0242ac11000d": Phase="Running", Reason="", readiness=true. Elapsed: 4.43455989s Aug 27 16:13:43.461: INFO: Pod "downwardapi-volume-4335d21f-e880-11ea-894e-0242ac11000d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.441719315s STEP: Saw pod success Aug 27 16:13:43.461: INFO: Pod "downwardapi-volume-4335d21f-e880-11ea-894e-0242ac11000d" satisfied condition "success or failure" Aug 27 16:13:43.466: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-4335d21f-e880-11ea-894e-0242ac11000d container client-container: STEP: delete the pod Aug 27 16:13:43.553: INFO: Waiting for pod downwardapi-volume-4335d21f-e880-11ea-894e-0242ac11000d to disappear Aug 27 16:13:43.575: INFO: Pod downwardapi-volume-4335d21f-e880-11ea-894e-0242ac11000d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 27 16:13:43.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-pt585" for this suite. 
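The final spec above exposes the container's own CPU request through a projected downwardAPI file and verifies the value the container reads back. A hypothetical projection using a resourceFieldRef; the container name, file path, and divisor are illustrative, not the test's exact fixtures.

```go
// Hypothetical sketch: surface a container's CPU request as a file via a
// projected downwardAPI volume. Divisor controls the reported unit: with a
// divisor of 1m, a 250m request is written as "250".
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container", // container whose request is read
								Resource:      "requests.cpu",
								Divisor:       resource.MustParse("1m"),
							},
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```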
Aug 27 16:13:51.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 27 16:13:51.703: INFO: namespace: e2e-tests-projected-pt585, resource: bindings, ignored listing per whitelist Aug 27 16:13:51.767: INFO: namespace e2e-tests-projected-pt585 deletion completed in 8.1844893s • [SLOW TEST:15.030 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSAug 27 16:13:51.769: INFO: Running AfterSuite actions on all nodes Aug 27 16:13:51.770: INFO: Running AfterSuite actions on node 1 Aug 27 16:13:51.770: INFO: Skipping dumping logs from cluster Ran 200 of 2164 Specs in 8773.740 seconds SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped PASS